Jan 31 04:20:15 np0005603787 kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Jan 31 04:20:15 np0005603787 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 31 04:20:15 np0005603787 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 04:20:15 np0005603787 kernel: BIOS-provided physical RAM map:
Jan 31 04:20:15 np0005603787 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 31 04:20:15 np0005603787 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 31 04:20:15 np0005603787 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 31 04:20:15 np0005603787 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 31 04:20:15 np0005603787 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 31 04:20:15 np0005603787 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 31 04:20:15 np0005603787 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 31 04:20:15 np0005603787 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 31 04:20:15 np0005603787 kernel: NX (Execute Disable) protection: active
Jan 31 04:20:15 np0005603787 kernel: APIC: Static calls initialized
Jan 31 04:20:15 np0005603787 kernel: SMBIOS 2.8 present.
Jan 31 04:20:15 np0005603787 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 31 04:20:15 np0005603787 kernel: Hypervisor detected: KVM
Jan 31 04:20:15 np0005603787 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 31 04:20:15 np0005603787 kernel: kvm-clock: using sched offset of 5388350076 cycles
Jan 31 04:20:15 np0005603787 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 31 04:20:15 np0005603787 kernel: tsc: Detected 2799.998 MHz processor
Jan 31 04:20:15 np0005603787 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 31 04:20:15 np0005603787 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 31 04:20:15 np0005603787 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 31 04:20:15 np0005603787 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 31 04:20:15 np0005603787 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 31 04:20:15 np0005603787 kernel: Using GB pages for direct mapping
Jan 31 04:20:15 np0005603787 kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Jan 31 04:20:15 np0005603787 kernel: ACPI: Early table checksum verification disabled
Jan 31 04:20:15 np0005603787 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 31 04:20:15 np0005603787 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 04:20:15 np0005603787 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 04:20:15 np0005603787 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 04:20:15 np0005603787 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 31 04:20:15 np0005603787 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 04:20:15 np0005603787 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 04:20:15 np0005603787 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 31 04:20:15 np0005603787 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 31 04:20:15 np0005603787 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 31 04:20:15 np0005603787 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 31 04:20:15 np0005603787 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 31 04:20:15 np0005603787 kernel: No NUMA configuration found
Jan 31 04:20:15 np0005603787 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 31 04:20:15 np0005603787 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 31 04:20:15 np0005603787 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 31 04:20:15 np0005603787 kernel: Zone ranges:
Jan 31 04:20:15 np0005603787 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 31 04:20:15 np0005603787 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 31 04:20:15 np0005603787 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 04:20:15 np0005603787 kernel:  Device   empty
Jan 31 04:20:15 np0005603787 kernel: Movable zone start for each node
Jan 31 04:20:15 np0005603787 kernel: Early memory node ranges
Jan 31 04:20:15 np0005603787 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 31 04:20:15 np0005603787 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 31 04:20:15 np0005603787 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 04:20:15 np0005603787 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 31 04:20:15 np0005603787 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 31 04:20:15 np0005603787 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 31 04:20:15 np0005603787 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 31 04:20:15 np0005603787 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 31 04:20:15 np0005603787 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 31 04:20:15 np0005603787 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 31 04:20:15 np0005603787 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 31 04:20:15 np0005603787 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 31 04:20:15 np0005603787 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 31 04:20:15 np0005603787 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 31 04:20:15 np0005603787 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 31 04:20:15 np0005603787 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 31 04:20:15 np0005603787 kernel: TSC deadline timer available
Jan 31 04:20:15 np0005603787 kernel: CPU topo: Max. logical packages:   8
Jan 31 04:20:15 np0005603787 kernel: CPU topo: Max. logical dies:       8
Jan 31 04:20:15 np0005603787 kernel: CPU topo: Max. dies per package:   1
Jan 31 04:20:15 np0005603787 kernel: CPU topo: Max. threads per core:   1
Jan 31 04:20:15 np0005603787 kernel: CPU topo: Num. cores per package:     1
Jan 31 04:20:15 np0005603787 kernel: CPU topo: Num. threads per package:   1
Jan 31 04:20:15 np0005603787 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 31 04:20:15 np0005603787 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 31 04:20:15 np0005603787 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 31 04:20:15 np0005603787 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 31 04:20:15 np0005603787 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 31 04:20:15 np0005603787 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 31 04:20:15 np0005603787 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 31 04:20:15 np0005603787 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 31 04:20:15 np0005603787 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 31 04:20:15 np0005603787 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 31 04:20:15 np0005603787 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 31 04:20:15 np0005603787 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 31 04:20:15 np0005603787 kernel: Booting paravirtualized kernel on KVM
Jan 31 04:20:15 np0005603787 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 31 04:20:15 np0005603787 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 31 04:20:15 np0005603787 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 31 04:20:15 np0005603787 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 31 04:20:15 np0005603787 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 04:20:15 np0005603787 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Jan 31 04:20:15 np0005603787 kernel: random: crng init done
Jan 31 04:20:15 np0005603787 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 31 04:20:15 np0005603787 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 31 04:20:15 np0005603787 kernel: Fallback order for Node 0: 0 
Jan 31 04:20:15 np0005603787 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 31 04:20:15 np0005603787 kernel: Policy zone: Normal
Jan 31 04:20:15 np0005603787 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 31 04:20:15 np0005603787 kernel: software IO TLB: area num 8.
Jan 31 04:20:15 np0005603787 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 31 04:20:15 np0005603787 kernel: ftrace: allocating 49438 entries in 194 pages
Jan 31 04:20:15 np0005603787 kernel: ftrace: allocated 194 pages with 3 groups
Jan 31 04:20:15 np0005603787 kernel: Dynamic Preempt: voluntary
Jan 31 04:20:15 np0005603787 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 31 04:20:15 np0005603787 kernel: rcu: 	RCU event tracing is enabled.
Jan 31 04:20:15 np0005603787 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 31 04:20:15 np0005603787 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 31 04:20:15 np0005603787 kernel: 	Rude variant of Tasks RCU enabled.
Jan 31 04:20:15 np0005603787 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 31 04:20:15 np0005603787 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 31 04:20:15 np0005603787 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 31 04:20:15 np0005603787 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 04:20:15 np0005603787 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 04:20:15 np0005603787 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 04:20:15 np0005603787 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 31 04:20:15 np0005603787 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 31 04:20:15 np0005603787 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 31 04:20:15 np0005603787 kernel: Console: colour VGA+ 80x25
Jan 31 04:20:15 np0005603787 kernel: printk: console [ttyS0] enabled
Jan 31 04:20:15 np0005603787 kernel: ACPI: Core revision 20230331
Jan 31 04:20:15 np0005603787 kernel: APIC: Switch to symmetric I/O mode setup
Jan 31 04:20:15 np0005603787 kernel: x2apic enabled
Jan 31 04:20:15 np0005603787 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 31 04:20:15 np0005603787 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 31 04:20:15 np0005603787 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 31 04:20:15 np0005603787 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 31 04:20:15 np0005603787 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 31 04:20:15 np0005603787 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 31 04:20:15 np0005603787 kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Jan 31 04:20:15 np0005603787 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 31 04:20:15 np0005603787 kernel: Spectre V2 : Mitigation: Retpolines
Jan 31 04:20:15 np0005603787 kernel: RETBleed: Mitigation: untrained return thunk
Jan 31 04:20:15 np0005603787 kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Jan 31 04:20:15 np0005603787 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 31 04:20:15 np0005603787 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 31 04:20:15 np0005603787 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 31 04:20:15 np0005603787 kernel: active return thunk: retbleed_return_thunk
Jan 31 04:20:15 np0005603787 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 31 04:20:15 np0005603787 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 31 04:20:15 np0005603787 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 31 04:20:15 np0005603787 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 31 04:20:15 np0005603787 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 31 04:20:15 np0005603787 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 31 04:20:15 np0005603787 kernel: Freeing SMP alternatives memory: 40K
Jan 31 04:20:15 np0005603787 kernel: pid_max: default: 32768 minimum: 301
Jan 31 04:20:15 np0005603787 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 31 04:20:15 np0005603787 kernel: landlock: Up and running.
Jan 31 04:20:15 np0005603787 kernel: Yama: becoming mindful.
Jan 31 04:20:15 np0005603787 kernel: SELinux:  Initializing.
Jan 31 04:20:15 np0005603787 kernel: LSM support for eBPF active
Jan 31 04:20:15 np0005603787 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 04:20:15 np0005603787 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 04:20:15 np0005603787 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 31 04:20:15 np0005603787 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 31 04:20:15 np0005603787 kernel: ... version:                0
Jan 31 04:20:15 np0005603787 kernel: ... bit width:              48
Jan 31 04:20:15 np0005603787 kernel: ... generic registers:      6
Jan 31 04:20:15 np0005603787 kernel: ... value mask:             0000ffffffffffff
Jan 31 04:20:15 np0005603787 kernel: ... max period:             00007fffffffffff
Jan 31 04:20:15 np0005603787 kernel: ... fixed-purpose events:   0
Jan 31 04:20:15 np0005603787 kernel: ... event mask:             000000000000003f
Jan 31 04:20:15 np0005603787 kernel: signal: max sigframe size: 1776
Jan 31 04:20:15 np0005603787 kernel: rcu: Hierarchical SRCU implementation.
Jan 31 04:20:15 np0005603787 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 31 04:20:15 np0005603787 kernel: smp: Bringing up secondary CPUs ...
Jan 31 04:20:15 np0005603787 kernel: smpboot: x86: Booting SMP configuration:
Jan 31 04:20:15 np0005603787 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 31 04:20:15 np0005603787 kernel: smp: Brought up 1 node, 8 CPUs
Jan 31 04:20:15 np0005603787 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 31 04:20:15 np0005603787 kernel: node 0 deferred pages initialised in 21ms
Jan 31 04:20:15 np0005603787 kernel: Memory: 7763476K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618408K reserved, 0K cma-reserved)
Jan 31 04:20:15 np0005603787 kernel: devtmpfs: initialized
Jan 31 04:20:15 np0005603787 kernel: x86/mm: Memory block size: 128MB
Jan 31 04:20:15 np0005603787 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 31 04:20:15 np0005603787 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 31 04:20:15 np0005603787 kernel: pinctrl core: initialized pinctrl subsystem
Jan 31 04:20:15 np0005603787 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 31 04:20:15 np0005603787 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 31 04:20:15 np0005603787 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 31 04:20:15 np0005603787 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 31 04:20:15 np0005603787 kernel: audit: initializing netlink subsys (disabled)
Jan 31 04:20:15 np0005603787 kernel: audit: type=2000 audit(1769851213.189:1): state=initialized audit_enabled=0 res=1
Jan 31 04:20:15 np0005603787 kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 31 04:20:15 np0005603787 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 31 04:20:15 np0005603787 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 31 04:20:15 np0005603787 kernel: cpuidle: using governor menu
Jan 31 04:20:15 np0005603787 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 31 04:20:15 np0005603787 kernel: PCI: Using configuration type 1 for base access
Jan 31 04:20:15 np0005603787 kernel: PCI: Using configuration type 1 for extended access
Jan 31 04:20:15 np0005603787 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 31 04:20:15 np0005603787 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 31 04:20:15 np0005603787 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 31 04:20:15 np0005603787 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 31 04:20:15 np0005603787 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 31 04:20:15 np0005603787 kernel: Demotion targets for Node 0: null
Jan 31 04:20:15 np0005603787 kernel: cryptd: max_cpu_qlen set to 1000
Jan 31 04:20:15 np0005603787 kernel: ACPI: Added _OSI(Module Device)
Jan 31 04:20:15 np0005603787 kernel: ACPI: Added _OSI(Processor Device)
Jan 31 04:20:15 np0005603787 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 31 04:20:15 np0005603787 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 31 04:20:15 np0005603787 kernel: ACPI: Interpreter enabled
Jan 31 04:20:15 np0005603787 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 31 04:20:15 np0005603787 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 31 04:20:15 np0005603787 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 31 04:20:15 np0005603787 kernel: PCI: Using E820 reservations for host bridge windows
Jan 31 04:20:15 np0005603787 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 31 04:20:15 np0005603787 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 31 04:20:15 np0005603787 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [3] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [4] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [5] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [6] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [7] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [8] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [9] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [10] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [11] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [12] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [13] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [14] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [15] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [16] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [17] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [18] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [19] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [20] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [21] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [22] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [23] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [24] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [25] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [26] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [27] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [28] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [29] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [30] registered
Jan 31 04:20:15 np0005603787 kernel: acpiphp: Slot [31] registered
Jan 31 04:20:15 np0005603787 kernel: PCI host bridge to bus 0000:00
Jan 31 04:20:15 np0005603787 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 31 04:20:15 np0005603787 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 31 04:20:15 np0005603787 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 31 04:20:15 np0005603787 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 31 04:20:15 np0005603787 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 31 04:20:15 np0005603787 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 31 04:20:15 np0005603787 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 31 04:20:15 np0005603787 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 31 04:20:15 np0005603787 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 31 04:20:15 np0005603787 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 31 04:20:15 np0005603787 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 31 04:20:15 np0005603787 kernel: iommu: Default domain type: Translated
Jan 31 04:20:15 np0005603787 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 31 04:20:15 np0005603787 kernel: SCSI subsystem initialized
Jan 31 04:20:15 np0005603787 kernel: ACPI: bus type USB registered
Jan 31 04:20:15 np0005603787 kernel: usbcore: registered new interface driver usbfs
Jan 31 04:20:15 np0005603787 kernel: usbcore: registered new interface driver hub
Jan 31 04:20:15 np0005603787 kernel: usbcore: registered new device driver usb
Jan 31 04:20:15 np0005603787 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 31 04:20:15 np0005603787 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 31 04:20:15 np0005603787 kernel: PTP clock support registered
Jan 31 04:20:15 np0005603787 kernel: EDAC MC: Ver: 3.0.0
Jan 31 04:20:15 np0005603787 kernel: NetLabel: Initializing
Jan 31 04:20:15 np0005603787 kernel: NetLabel:  domain hash size = 128
Jan 31 04:20:15 np0005603787 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 31 04:20:15 np0005603787 kernel: NetLabel:  unlabeled traffic allowed by default
Jan 31 04:20:15 np0005603787 kernel: PCI: Using ACPI for IRQ routing
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 31 04:20:15 np0005603787 kernel: vgaarb: loaded
Jan 31 04:20:15 np0005603787 kernel: clocksource: Switched to clocksource kvm-clock
Jan 31 04:20:15 np0005603787 kernel: VFS: Disk quotas dquot_6.6.0
Jan 31 04:20:15 np0005603787 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 31 04:20:15 np0005603787 kernel: pnp: PnP ACPI init
Jan 31 04:20:15 np0005603787 kernel: pnp: PnP ACPI: found 5 devices
Jan 31 04:20:15 np0005603787 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 31 04:20:15 np0005603787 kernel: NET: Registered PF_INET protocol family
Jan 31 04:20:15 np0005603787 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 31 04:20:15 np0005603787 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 31 04:20:15 np0005603787 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 31 04:20:15 np0005603787 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 31 04:20:15 np0005603787 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 31 04:20:15 np0005603787 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 31 04:20:15 np0005603787 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 31 04:20:15 np0005603787 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 04:20:15 np0005603787 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 04:20:15 np0005603787 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 31 04:20:15 np0005603787 kernel: NET: Registered PF_XDP protocol family
Jan 31 04:20:15 np0005603787 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 31 04:20:15 np0005603787 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 31 04:20:15 np0005603787 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 31 04:20:15 np0005603787 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 31 04:20:15 np0005603787 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 31 04:20:15 np0005603787 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 31 04:20:15 np0005603787 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 23330 usecs
Jan 31 04:20:15 np0005603787 kernel: PCI: CLS 0 bytes, default 64
Jan 31 04:20:15 np0005603787 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 31 04:20:15 np0005603787 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 31 04:20:15 np0005603787 kernel: ACPI: bus type thunderbolt registered
Jan 31 04:20:15 np0005603787 kernel: Trying to unpack rootfs image as initramfs...
Jan 31 04:20:15 np0005603787 kernel: Initialise system trusted keyrings
Jan 31 04:20:15 np0005603787 kernel: Key type blacklist registered
Jan 31 04:20:15 np0005603787 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 31 04:20:15 np0005603787 kernel: zbud: loaded
Jan 31 04:20:15 np0005603787 kernel: integrity: Platform Keyring initialized
Jan 31 04:20:15 np0005603787 kernel: integrity: Machine keyring initialized
Jan 31 04:20:15 np0005603787 kernel: Freeing initrd memory: 88000K
Jan 31 04:20:15 np0005603787 kernel: NET: Registered PF_ALG protocol family
Jan 31 04:20:15 np0005603787 kernel: xor: automatically using best checksumming function   avx       
Jan 31 04:20:15 np0005603787 kernel: Key type asymmetric registered
Jan 31 04:20:15 np0005603787 kernel: Asymmetric key parser 'x509' registered
Jan 31 04:20:15 np0005603787 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 31 04:20:15 np0005603787 kernel: io scheduler mq-deadline registered
Jan 31 04:20:15 np0005603787 kernel: io scheduler kyber registered
Jan 31 04:20:15 np0005603787 kernel: io scheduler bfq registered
Jan 31 04:20:15 np0005603787 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 31 04:20:15 np0005603787 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 31 04:20:15 np0005603787 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 31 04:20:15 np0005603787 kernel: ACPI: button: Power Button [PWRF]
Jan 31 04:20:15 np0005603787 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 31 04:20:15 np0005603787 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 31 04:20:15 np0005603787 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 31 04:20:15 np0005603787 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 31 04:20:15 np0005603787 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 31 04:20:15 np0005603787 kernel: Non-volatile memory driver v1.3
Jan 31 04:20:15 np0005603787 kernel: rdac: device handler registered
Jan 31 04:20:15 np0005603787 kernel: hp_sw: device handler registered
Jan 31 04:20:15 np0005603787 kernel: emc: device handler registered
Jan 31 04:20:15 np0005603787 kernel: alua: device handler registered
Jan 31 04:20:15 np0005603787 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 31 04:20:15 np0005603787 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 31 04:20:15 np0005603787 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 31 04:20:15 np0005603787 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 31 04:20:15 np0005603787 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 31 04:20:15 np0005603787 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 31 04:20:15 np0005603787 kernel: usb usb1: Product: UHCI Host Controller
Jan 31 04:20:15 np0005603787 kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Jan 31 04:20:15 np0005603787 kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 31 04:20:15 np0005603787 kernel: hub 1-0:1.0: USB hub found
Jan 31 04:20:15 np0005603787 kernel: hub 1-0:1.0: 2 ports detected
Jan 31 04:20:15 np0005603787 kernel: usbcore: registered new interface driver usbserial_generic
Jan 31 04:20:15 np0005603787 kernel: usbserial: USB Serial support registered for generic
Jan 31 04:20:15 np0005603787 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 31 04:20:15 np0005603787 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 31 04:20:15 np0005603787 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 31 04:20:15 np0005603787 kernel: mousedev: PS/2 mouse device common for all mice
Jan 31 04:20:15 np0005603787 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 31 04:20:15 np0005603787 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 31 04:20:15 np0005603787 kernel: rtc_cmos 00:04: registered as rtc0
Jan 31 04:20:15 np0005603787 kernel: rtc_cmos 00:04: setting system clock to 2026-01-31T09:20:14 UTC (1769851214)
Jan 31 04:20:15 np0005603787 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 31 04:20:15 np0005603787 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 31 04:20:15 np0005603787 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 31 04:20:15 np0005603787 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 31 04:20:15 np0005603787 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 31 04:20:15 np0005603787 kernel: usbcore: registered new interface driver usbhid
Jan 31 04:20:15 np0005603787 kernel: usbhid: USB HID core driver
Jan 31 04:20:15 np0005603787 kernel: drop_monitor: Initializing network drop monitor service
Jan 31 04:20:15 np0005603787 kernel: Initializing XFRM netlink socket
Jan 31 04:20:15 np0005603787 kernel: NET: Registered PF_INET6 protocol family
Jan 31 04:20:15 np0005603787 kernel: Segment Routing with IPv6
Jan 31 04:20:15 np0005603787 kernel: NET: Registered PF_PACKET protocol family
Jan 31 04:20:15 np0005603787 kernel: mpls_gso: MPLS GSO support
Jan 31 04:20:15 np0005603787 kernel: IPI shorthand broadcast: enabled
Jan 31 04:20:15 np0005603787 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 31 04:20:15 np0005603787 kernel: AES CTR mode by8 optimization enabled
Jan 31 04:20:15 np0005603787 kernel: sched_clock: Marking stable (1920005839, 166156769)->(2268884381, -182721773)
Jan 31 04:20:15 np0005603787 kernel: registered taskstats version 1
Jan 31 04:20:15 np0005603787 kernel: Loading compiled-in X.509 certificates
Jan 31 04:20:15 np0005603787 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 04:20:15 np0005603787 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 31 04:20:15 np0005603787 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 31 04:20:15 np0005603787 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 31 04:20:15 np0005603787 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 31 04:20:15 np0005603787 kernel: Demotion targets for Node 0: null
Jan 31 04:20:15 np0005603787 kernel: page_owner is disabled
Jan 31 04:20:15 np0005603787 kernel: Key type .fscrypt registered
Jan 31 04:20:15 np0005603787 kernel: Key type fscrypt-provisioning registered
Jan 31 04:20:15 np0005603787 kernel: Key type big_key registered
Jan 31 04:20:15 np0005603787 kernel: Key type encrypted registered
Jan 31 04:20:15 np0005603787 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 31 04:20:15 np0005603787 kernel: Loading compiled-in module X.509 certificates
Jan 31 04:20:15 np0005603787 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 04:20:15 np0005603787 kernel: ima: Allocated hash algorithm: sha256
Jan 31 04:20:15 np0005603787 kernel: ima: No architecture policies found
Jan 31 04:20:15 np0005603787 kernel: evm: Initialising EVM extended attributes:
Jan 31 04:20:15 np0005603787 kernel: evm: security.selinux
Jan 31 04:20:15 np0005603787 kernel: evm: security.SMACK64 (disabled)
Jan 31 04:20:15 np0005603787 kernel: evm: security.SMACK64EXEC (disabled)
Jan 31 04:20:15 np0005603787 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 31 04:20:15 np0005603787 kernel: evm: security.SMACK64MMAP (disabled)
Jan 31 04:20:15 np0005603787 kernel: evm: security.apparmor (disabled)
Jan 31 04:20:15 np0005603787 kernel: evm: security.ima
Jan 31 04:20:15 np0005603787 kernel: evm: security.capability
Jan 31 04:20:15 np0005603787 kernel: evm: HMAC attrs: 0x1
Jan 31 04:20:15 np0005603787 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 31 04:20:15 np0005603787 kernel: Running certificate verification RSA selftest
Jan 31 04:20:15 np0005603787 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 31 04:20:15 np0005603787 kernel: Running certificate verification ECDSA selftest
Jan 31 04:20:15 np0005603787 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 31 04:20:15 np0005603787 kernel: clk: Disabling unused clocks
Jan 31 04:20:15 np0005603787 kernel: Freeing unused decrypted memory: 2028K
Jan 31 04:20:15 np0005603787 kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 31 04:20:15 np0005603787 kernel: Write protecting the kernel read-only data: 30720k
Jan 31 04:20:15 np0005603787 kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Jan 31 04:20:15 np0005603787 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 31 04:20:15 np0005603787 kernel: Run /init as init process
Jan 31 04:20:15 np0005603787 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 04:20:15 np0005603787 systemd: Detected virtualization kvm.
Jan 31 04:20:15 np0005603787 systemd: Detected architecture x86-64.
Jan 31 04:20:15 np0005603787 systemd: Running in initrd.
Jan 31 04:20:15 np0005603787 systemd: No hostname configured, using default hostname.
Jan 31 04:20:15 np0005603787 systemd: Hostname set to <localhost>.
Jan 31 04:20:15 np0005603787 systemd: Initializing machine ID from VM UUID.
Jan 31 04:20:15 np0005603787 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 31 04:20:15 np0005603787 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 31 04:20:15 np0005603787 kernel: usb 1-1: Product: QEMU USB Tablet
Jan 31 04:20:15 np0005603787 kernel: usb 1-1: Manufacturer: QEMU
Jan 31 04:20:15 np0005603787 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 31 04:20:15 np0005603787 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 31 04:20:15 np0005603787 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 31 04:20:15 np0005603787 systemd: Queued start job for default target Initrd Default Target.
Jan 31 04:20:15 np0005603787 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 04:20:15 np0005603787 systemd: Reached target Local Encrypted Volumes.
Jan 31 04:20:15 np0005603787 systemd: Reached target Initrd /usr File System.
Jan 31 04:20:15 np0005603787 systemd: Reached target Local File Systems.
Jan 31 04:20:15 np0005603787 systemd: Reached target Path Units.
Jan 31 04:20:15 np0005603787 systemd: Reached target Slice Units.
Jan 31 04:20:15 np0005603787 systemd: Reached target Swaps.
Jan 31 04:20:15 np0005603787 systemd: Reached target Timer Units.
Jan 31 04:20:15 np0005603787 systemd: Listening on D-Bus System Message Bus Socket.
Jan 31 04:20:15 np0005603787 systemd: Listening on Journal Socket (/dev/log).
Jan 31 04:20:15 np0005603787 systemd: Listening on Journal Socket.
Jan 31 04:20:15 np0005603787 systemd: Listening on udev Control Socket.
Jan 31 04:20:15 np0005603787 systemd: Listening on udev Kernel Socket.
Jan 31 04:20:15 np0005603787 systemd: Reached target Socket Units.
Jan 31 04:20:15 np0005603787 systemd: Starting Create List of Static Device Nodes...
Jan 31 04:20:15 np0005603787 systemd: Starting Journal Service...
Jan 31 04:20:15 np0005603787 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 04:20:15 np0005603787 systemd: Starting Apply Kernel Variables...
Jan 31 04:20:15 np0005603787 systemd: Starting Create System Users...
Jan 31 04:20:15 np0005603787 systemd: Starting Setup Virtual Console...
Jan 31 04:20:15 np0005603787 systemd: Finished Create List of Static Device Nodes.
Jan 31 04:20:15 np0005603787 systemd: Finished Apply Kernel Variables.
Jan 31 04:20:15 np0005603787 systemd: Finished Create System Users.
Jan 31 04:20:15 np0005603787 systemd-journald[308]: Journal started
Jan 31 04:20:15 np0005603787 systemd-journald[308]: Runtime Journal (/run/log/journal/85b121aca71f4df59fa20ab94d362cec) is 8.0M, max 153.6M, 145.6M free.
Jan 31 04:20:15 np0005603787 systemd-sysusers[313]: Creating group 'users' with GID 100.
Jan 31 04:20:15 np0005603787 systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Jan 31 04:20:15 np0005603787 systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 31 04:20:15 np0005603787 systemd: Started Journal Service.
Jan 31 04:20:15 np0005603787 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 04:20:15 np0005603787 systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 04:20:15 np0005603787 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 04:20:15 np0005603787 systemd[1]: Finished Setup Virtual Console.
Jan 31 04:20:15 np0005603787 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 31 04:20:15 np0005603787 systemd[1]: Starting dracut cmdline hook...
Jan 31 04:20:15 np0005603787 systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 04:20:15 np0005603787 dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Jan 31 04:20:15 np0005603787 dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 04:20:15 np0005603787 systemd[1]: Finished dracut cmdline hook.
Jan 31 04:20:15 np0005603787 systemd[1]: Starting dracut pre-udev hook...
Jan 31 04:20:15 np0005603787 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 31 04:20:15 np0005603787 kernel: device-mapper: uevent: version 1.0.3
Jan 31 04:20:15 np0005603787 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 31 04:20:15 np0005603787 kernel: RPC: Registered named UNIX socket transport module.
Jan 31 04:20:15 np0005603787 kernel: RPC: Registered udp transport module.
Jan 31 04:20:15 np0005603787 kernel: RPC: Registered tcp transport module.
Jan 31 04:20:15 np0005603787 kernel: RPC: Registered tcp-with-tls transport module.
Jan 31 04:20:15 np0005603787 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 31 04:20:15 np0005603787 rpc.statd[445]: Version 2.5.4 starting
Jan 31 04:20:15 np0005603787 rpc.statd[445]: Initializing NSM state
Jan 31 04:20:15 np0005603787 rpc.idmapd[450]: Setting log level to 0
Jan 31 04:20:15 np0005603787 systemd[1]: Finished dracut pre-udev hook.
Jan 31 04:20:15 np0005603787 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 04:20:15 np0005603787 systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 04:20:15 np0005603787 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 04:20:15 np0005603787 systemd[1]: Starting dracut pre-trigger hook...
Jan 31 04:20:15 np0005603787 systemd[1]: Finished dracut pre-trigger hook.
Jan 31 04:20:15 np0005603787 systemd[1]: Starting Coldplug All udev Devices...
Jan 31 04:20:15 np0005603787 systemd[1]: Created slice Slice /system/modprobe.
Jan 31 04:20:15 np0005603787 systemd[1]: Starting Load Kernel Module configfs...
Jan 31 04:20:15 np0005603787 systemd[1]: Finished Coldplug All udev Devices.
Jan 31 04:20:15 np0005603787 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 04:20:15 np0005603787 systemd[1]: Reached target Network.
Jan 31 04:20:15 np0005603787 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 04:20:15 np0005603787 systemd[1]: Starting dracut initqueue hook...
Jan 31 04:20:15 np0005603787 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 04:20:15 np0005603787 systemd[1]: Finished Load Kernel Module configfs.
Jan 31 04:20:15 np0005603787 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 31 04:20:15 np0005603787 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 31 04:20:15 np0005603787 kernel: vda: vda1
Jan 31 04:20:15 np0005603787 kernel: scsi host0: ata_piix
Jan 31 04:20:15 np0005603787 kernel: scsi host1: ata_piix
Jan 31 04:20:15 np0005603787 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 31 04:20:15 np0005603787 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 31 04:20:15 np0005603787 systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 04:20:15 np0005603787 systemd[1]: Reached target Initrd Root Device.
Jan 31 04:20:15 np0005603787 kernel: ata1: found unknown device (class 0)
Jan 31 04:20:15 np0005603787 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 31 04:20:15 np0005603787 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 31 04:20:16 np0005603787 systemd-udevd[486]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:20:16 np0005603787 systemd[1]: Mounting Kernel Configuration File System...
Jan 31 04:20:16 np0005603787 systemd[1]: Mounted Kernel Configuration File System.
Jan 31 04:20:16 np0005603787 systemd[1]: Reached target System Initialization.
Jan 31 04:20:16 np0005603787 systemd[1]: Reached target Basic System.
Jan 31 04:20:16 np0005603787 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 31 04:20:16 np0005603787 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 31 04:20:16 np0005603787 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 31 04:20:16 np0005603787 systemd[1]: Finished dracut initqueue hook.
Jan 31 04:20:16 np0005603787 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 04:20:16 np0005603787 systemd[1]: Reached target Remote Encrypted Volumes.
Jan 31 04:20:16 np0005603787 systemd[1]: Reached target Remote File Systems.
Jan 31 04:20:16 np0005603787 systemd[1]: Starting dracut pre-mount hook...
Jan 31 04:20:16 np0005603787 systemd[1]: Finished dracut pre-mount hook.
Jan 31 04:20:16 np0005603787 systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Jan 31 04:20:16 np0005603787 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Jan 31 04:20:16 np0005603787 systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 04:20:16 np0005603787 systemd[1]: Mounting /sysroot...
Jan 31 04:20:16 np0005603787 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 31 04:20:16 np0005603787 kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Jan 31 04:20:16 np0005603787 kernel: XFS (vda1): Ending clean mount
Jan 31 04:20:16 np0005603787 systemd[1]: Mounted /sysroot.
Jan 31 04:20:16 np0005603787 systemd[1]: Reached target Initrd Root File System.
Jan 31 04:20:16 np0005603787 systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 31 04:20:16 np0005603787 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 31 04:20:16 np0005603787 systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 31 04:20:16 np0005603787 systemd[1]: Reached target Initrd File Systems.
Jan 31 04:20:16 np0005603787 systemd[1]: Reached target Initrd Default Target.
Jan 31 04:20:16 np0005603787 systemd[1]: Starting dracut mount hook...
Jan 31 04:20:16 np0005603787 systemd[1]: Finished dracut mount hook.
Jan 31 04:20:16 np0005603787 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 31 04:20:16 np0005603787 rpc.idmapd[450]: exiting on signal 15
Jan 31 04:20:16 np0005603787 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 31 04:20:17 np0005603787 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped target Network.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped target Timer Units.
Jan 31 04:20:17 np0005603787 systemd[1]: dbus.socket: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 31 04:20:17 np0005603787 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped target Initrd Default Target.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped target Basic System.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped target Initrd Root Device.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped target Initrd /usr File System.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped target Path Units.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped target Remote File Systems.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped target Slice Units.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped target Socket Units.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped target System Initialization.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped target Local File Systems.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped target Swaps.
Jan 31 04:20:17 np0005603787 systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped dracut mount hook.
Jan 31 04:20:17 np0005603787 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped dracut pre-mount hook.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 31 04:20:17 np0005603787 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 31 04:20:17 np0005603787 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped dracut initqueue hook.
Jan 31 04:20:17 np0005603787 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped Apply Kernel Variables.
Jan 31 04:20:17 np0005603787 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 31 04:20:17 np0005603787 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped Coldplug All udev Devices.
Jan 31 04:20:17 np0005603787 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped dracut pre-trigger hook.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 31 04:20:17 np0005603787 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped Setup Virtual Console.
Jan 31 04:20:17 np0005603787 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 31 04:20:17 np0005603787 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 31 04:20:17 np0005603787 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Closed udev Control Socket.
Jan 31 04:20:17 np0005603787 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Closed udev Kernel Socket.
Jan 31 04:20:17 np0005603787 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped dracut pre-udev hook.
Jan 31 04:20:17 np0005603787 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped dracut cmdline hook.
Jan 31 04:20:17 np0005603787 systemd[1]: Starting Cleanup udev Database...
Jan 31 04:20:17 np0005603787 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 31 04:20:17 np0005603787 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped Create List of Static Device Nodes.
Jan 31 04:20:17 np0005603787 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Stopped Create System Users.
Jan 31 04:20:17 np0005603787 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 31 04:20:17 np0005603787 systemd[1]: Finished Cleanup udev Database.
Jan 31 04:20:17 np0005603787 systemd[1]: Reached target Switch Root.
Jan 31 04:20:17 np0005603787 systemd[1]: Starting Switch Root...
Jan 31 04:20:17 np0005603787 systemd[1]: Switching root.
Jan 31 04:20:17 np0005603787 systemd-journald[308]: Journal stopped
Jan 31 04:20:18 np0005603787 systemd-journald: Received SIGTERM from PID 1 (systemd).
Jan 31 04:20:18 np0005603787 kernel: audit: type=1404 audit(1769851217.468:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 31 04:20:18 np0005603787 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 04:20:18 np0005603787 kernel: SELinux:  policy capability open_perms=1
Jan 31 04:20:18 np0005603787 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 04:20:18 np0005603787 kernel: SELinux:  policy capability always_check_network=0
Jan 31 04:20:18 np0005603787 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 04:20:18 np0005603787 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 04:20:18 np0005603787 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 04:20:18 np0005603787 kernel: audit: type=1403 audit(1769851217.615:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 31 04:20:18 np0005603787 systemd: Successfully loaded SELinux policy in 152.598ms.
Jan 31 04:20:18 np0005603787 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 36.582ms.
Jan 31 04:20:18 np0005603787 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 04:20:18 np0005603787 systemd: Detected virtualization kvm.
Jan 31 04:20:18 np0005603787 systemd: Detected architecture x86-64.
Jan 31 04:20:18 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:20:18 np0005603787 systemd: initrd-switch-root.service: Deactivated successfully.
Jan 31 04:20:18 np0005603787 systemd: Stopped Switch Root.
Jan 31 04:20:18 np0005603787 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 31 04:20:18 np0005603787 systemd: Created slice Slice /system/getty.
Jan 31 04:20:18 np0005603787 systemd: Created slice Slice /system/serial-getty.
Jan 31 04:20:18 np0005603787 systemd: Created slice Slice /system/sshd-keygen.
Jan 31 04:20:18 np0005603787 systemd: Created slice User and Session Slice.
Jan 31 04:20:18 np0005603787 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 04:20:18 np0005603787 systemd: Started Forward Password Requests to Wall Directory Watch.
Jan 31 04:20:18 np0005603787 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 31 04:20:18 np0005603787 systemd: Reached target Local Encrypted Volumes.
Jan 31 04:20:18 np0005603787 systemd: Stopped target Switch Root.
Jan 31 04:20:18 np0005603787 systemd: Stopped target Initrd File Systems.
Jan 31 04:20:18 np0005603787 systemd: Stopped target Initrd Root File System.
Jan 31 04:20:18 np0005603787 systemd: Reached target Local Integrity Protected Volumes.
Jan 31 04:20:18 np0005603787 systemd: Reached target Path Units.
Jan 31 04:20:18 np0005603787 systemd: Reached target rpc_pipefs.target.
Jan 31 04:20:18 np0005603787 systemd: Reached target Slice Units.
Jan 31 04:20:18 np0005603787 systemd: Reached target Swaps.
Jan 31 04:20:18 np0005603787 systemd: Reached target Local Verity Protected Volumes.
Jan 31 04:20:18 np0005603787 systemd: Listening on RPCbind Server Activation Socket.
Jan 31 04:20:18 np0005603787 systemd: Reached target RPC Port Mapper.
Jan 31 04:20:18 np0005603787 systemd: Listening on Process Core Dump Socket.
Jan 31 04:20:18 np0005603787 systemd: Listening on initctl Compatibility Named Pipe.
Jan 31 04:20:18 np0005603787 systemd: Listening on udev Control Socket.
Jan 31 04:20:18 np0005603787 systemd: Listening on udev Kernel Socket.
Jan 31 04:20:18 np0005603787 systemd: Mounting Huge Pages File System...
Jan 31 04:20:18 np0005603787 systemd: Mounting POSIX Message Queue File System...
Jan 31 04:20:18 np0005603787 systemd: Mounting Kernel Debug File System...
Jan 31 04:20:18 np0005603787 systemd: Mounting Kernel Trace File System...
Jan 31 04:20:18 np0005603787 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 04:20:18 np0005603787 systemd: Starting Create List of Static Device Nodes...
Jan 31 04:20:18 np0005603787 systemd: Starting Load Kernel Module configfs...
Jan 31 04:20:18 np0005603787 systemd: Starting Load Kernel Module drm...
Jan 31 04:20:18 np0005603787 systemd: Starting Load Kernel Module efi_pstore...
Jan 31 04:20:18 np0005603787 systemd: Starting Load Kernel Module fuse...
Jan 31 04:20:18 np0005603787 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 31 04:20:18 np0005603787 systemd: systemd-fsck-root.service: Deactivated successfully.
Jan 31 04:20:18 np0005603787 systemd: Stopped File System Check on Root Device.
Jan 31 04:20:18 np0005603787 systemd: Stopped Journal Service.
Jan 31 04:20:18 np0005603787 systemd: Starting Journal Service...
Jan 31 04:20:18 np0005603787 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 04:20:18 np0005603787 systemd: Starting Generate network units from Kernel command line...
Jan 31 04:20:18 np0005603787 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 04:20:18 np0005603787 systemd: Starting Remount Root and Kernel File Systems...
Jan 31 04:20:18 np0005603787 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 31 04:20:18 np0005603787 systemd: Starting Apply Kernel Variables...
Jan 31 04:20:18 np0005603787 kernel: fuse: init (API version 7.37)
Jan 31 04:20:18 np0005603787 systemd: Starting Coldplug All udev Devices...
Jan 31 04:20:18 np0005603787 systemd: Mounted Huge Pages File System.
Jan 31 04:20:18 np0005603787 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 31 04:20:18 np0005603787 systemd: Mounted POSIX Message Queue File System.
Jan 31 04:20:18 np0005603787 systemd: Mounted Kernel Debug File System.
Jan 31 04:20:18 np0005603787 systemd: Mounted Kernel Trace File System.
Jan 31 04:20:18 np0005603787 systemd: Finished Create List of Static Device Nodes.
Jan 31 04:20:18 np0005603787 systemd: modprobe@configfs.service: Deactivated successfully.
Jan 31 04:20:18 np0005603787 systemd: Finished Load Kernel Module configfs.
Jan 31 04:20:18 np0005603787 systemd: modprobe@efi_pstore.service: Deactivated successfully.
Jan 31 04:20:18 np0005603787 systemd: Finished Load Kernel Module efi_pstore.
Jan 31 04:20:18 np0005603787 systemd: modprobe@fuse.service: Deactivated successfully.
Jan 31 04:20:18 np0005603787 systemd: Finished Load Kernel Module fuse.
Jan 31 04:20:18 np0005603787 systemd: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 31 04:20:18 np0005603787 kernel: ACPI: bus type drm_connector registered
Jan 31 04:20:18 np0005603787 systemd: Finished Generate network units from Kernel command line.
Jan 31 04:20:18 np0005603787 systemd-journald[678]: Journal started
Jan 31 04:20:18 np0005603787 systemd-journald[678]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 04:20:18 np0005603787 systemd[1]: Queued start job for default target Multi-User System.
Jan 31 04:20:18 np0005603787 systemd: Started Journal Service.
Jan 31 04:20:18 np0005603787 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 31 04:20:18 np0005603787 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 31 04:20:18 np0005603787 systemd[1]: Finished Load Kernel Module drm.
Jan 31 04:20:18 np0005603787 systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 31 04:20:18 np0005603787 systemd[1]: Finished Apply Kernel Variables.
Jan 31 04:20:18 np0005603787 systemd[1]: Mounting FUSE Control File System...
Jan 31 04:20:18 np0005603787 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 04:20:18 np0005603787 systemd[1]: Starting Rebuild Hardware Database...
Jan 31 04:20:18 np0005603787 systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 31 04:20:18 np0005603787 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 31 04:20:18 np0005603787 systemd[1]: Starting Load/Save OS Random Seed...
Jan 31 04:20:18 np0005603787 systemd[1]: Starting Create System Users...
Jan 31 04:20:18 np0005603787 systemd-journald[678]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 04:20:18 np0005603787 systemd[1]: Finished Coldplug All udev Devices.
Jan 31 04:20:18 np0005603787 systemd-journald[678]: Received client request to flush runtime journal.
Jan 31 04:20:18 np0005603787 systemd[1]: Mounted FUSE Control File System.
Jan 31 04:20:18 np0005603787 systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 31 04:20:18 np0005603787 systemd[1]: Finished Load/Save OS Random Seed.
Jan 31 04:20:18 np0005603787 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 04:20:19 np0005603787 systemd[1]: Finished Create System Users.
Jan 31 04:20:19 np0005603787 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 04:20:19 np0005603787 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 04:20:19 np0005603787 systemd[1]: Reached target Preparation for Local File Systems.
Jan 31 04:20:19 np0005603787 systemd[1]: Reached target Local File Systems.
Jan 31 04:20:19 np0005603787 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 31 04:20:19 np0005603787 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 31 04:20:19 np0005603787 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 31 04:20:19 np0005603787 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 31 04:20:19 np0005603787 systemd[1]: Starting Automatic Boot Loader Update...
Jan 31 04:20:19 np0005603787 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 31 04:20:19 np0005603787 systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 04:20:19 np0005603787 bootctl[697]: Couldn't find EFI system partition, skipping.
Jan 31 04:20:19 np0005603787 systemd[1]: Finished Automatic Boot Loader Update.
Jan 31 04:20:19 np0005603787 systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 04:20:19 np0005603787 systemd[1]: Starting Security Auditing Service...
Jan 31 04:20:19 np0005603787 systemd[1]: Starting RPC Bind...
Jan 31 04:20:19 np0005603787 systemd[1]: Starting Rebuild Journal Catalog...
Jan 31 04:20:19 np0005603787 auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 31 04:20:19 np0005603787 auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 31 04:20:19 np0005603787 systemd[1]: Finished Rebuild Journal Catalog.
Jan 31 04:20:19 np0005603787 systemd[1]: Started RPC Bind.
Jan 31 04:20:19 np0005603787 augenrules[708]: /sbin/augenrules: No change
Jan 31 04:20:19 np0005603787 augenrules[723]: No rules
Jan 31 04:20:19 np0005603787 augenrules[723]: enabled 1
Jan 31 04:20:19 np0005603787 augenrules[723]: failure 1
Jan 31 04:20:19 np0005603787 augenrules[723]: pid 703
Jan 31 04:20:19 np0005603787 augenrules[723]: rate_limit 0
Jan 31 04:20:19 np0005603787 augenrules[723]: backlog_limit 8192
Jan 31 04:20:19 np0005603787 augenrules[723]: lost 0
Jan 31 04:20:19 np0005603787 augenrules[723]: backlog 0
Jan 31 04:20:19 np0005603787 augenrules[723]: backlog_wait_time 60000
Jan 31 04:20:19 np0005603787 augenrules[723]: backlog_wait_time_actual 0
Jan 31 04:20:19 np0005603787 augenrules[723]: enabled 1
Jan 31 04:20:19 np0005603787 augenrules[723]: failure 1
Jan 31 04:20:19 np0005603787 augenrules[723]: pid 703
Jan 31 04:20:19 np0005603787 augenrules[723]: rate_limit 0
Jan 31 04:20:19 np0005603787 augenrules[723]: backlog_limit 8192
Jan 31 04:20:19 np0005603787 augenrules[723]: lost 0
Jan 31 04:20:19 np0005603787 augenrules[723]: backlog 0
Jan 31 04:20:19 np0005603787 augenrules[723]: backlog_wait_time 60000
Jan 31 04:20:19 np0005603787 augenrules[723]: backlog_wait_time_actual 0
Jan 31 04:20:19 np0005603787 augenrules[723]: enabled 1
Jan 31 04:20:19 np0005603787 augenrules[723]: failure 1
Jan 31 04:20:19 np0005603787 augenrules[723]: pid 703
Jan 31 04:20:19 np0005603787 augenrules[723]: rate_limit 0
Jan 31 04:20:19 np0005603787 augenrules[723]: backlog_limit 8192
Jan 31 04:20:19 np0005603787 augenrules[723]: lost 0
Jan 31 04:20:19 np0005603787 augenrules[723]: backlog 0
Jan 31 04:20:19 np0005603787 augenrules[723]: backlog_wait_time 60000
Jan 31 04:20:19 np0005603787 augenrules[723]: backlog_wait_time_actual 0
Jan 31 04:20:19 np0005603787 systemd[1]: Started Security Auditing Service.
Jan 31 04:20:19 np0005603787 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 31 04:20:19 np0005603787 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 31 04:20:19 np0005603787 systemd[1]: Finished Rebuild Hardware Database.
Jan 31 04:20:19 np0005603787 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 04:20:19 np0005603787 systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 04:20:19 np0005603787 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 31 04:20:19 np0005603787 systemd[1]: Starting Update is Completed...
Jan 31 04:20:19 np0005603787 systemd[1]: Finished Update is Completed.
Jan 31 04:20:19 np0005603787 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 04:20:19 np0005603787 systemd[1]: Reached target System Initialization.
Jan 31 04:20:19 np0005603787 systemd[1]: Started dnf makecache --timer.
Jan 31 04:20:19 np0005603787 systemd[1]: Started Daily rotation of log files.
Jan 31 04:20:19 np0005603787 systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 31 04:20:19 np0005603787 systemd[1]: Reached target Timer Units.
Jan 31 04:20:19 np0005603787 systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 04:20:19 np0005603787 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 31 04:20:19 np0005603787 systemd[1]: Reached target Socket Units.
Jan 31 04:20:19 np0005603787 systemd[1]: Starting D-Bus System Message Bus...
Jan 31 04:20:19 np0005603787 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 04:20:19 np0005603787 systemd[1]: Starting Load Kernel Module configfs...
Jan 31 04:20:19 np0005603787 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 04:20:19 np0005603787 systemd[1]: Finished Load Kernel Module configfs.
Jan 31 04:20:19 np0005603787 systemd-udevd[739]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:20:19 np0005603787 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 31 04:20:19 np0005603787 systemd[1]: Started D-Bus System Message Bus.
Jan 31 04:20:19 np0005603787 systemd[1]: Reached target Basic System.
Jan 31 04:20:19 np0005603787 dbus-broker-lau[768]: Ready
Jan 31 04:20:19 np0005603787 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 31 04:20:19 np0005603787 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 31 04:20:19 np0005603787 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 31 04:20:19 np0005603787 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 31 04:20:19 np0005603787 systemd[1]: Starting NTP client/server...
Jan 31 04:20:19 np0005603787 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 31 04:20:19 np0005603787 systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 31 04:20:19 np0005603787 systemd[1]: Starting IPv4 firewall with iptables...
Jan 31 04:20:19 np0005603787 systemd[1]: Started irqbalance daemon.
Jan 31 04:20:19 np0005603787 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 31 04:20:19 np0005603787 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 04:20:19 np0005603787 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 04:20:19 np0005603787 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 04:20:19 np0005603787 systemd[1]: Reached target sshd-keygen.target.
Jan 31 04:20:19 np0005603787 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 31 04:20:19 np0005603787 systemd[1]: Reached target User and Group Name Lookups.
Jan 31 04:20:19 np0005603787 systemd[1]: Starting User Login Management...
Jan 31 04:20:19 np0005603787 systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 31 04:20:19 np0005603787 chronyd[800]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 04:20:19 np0005603787 chronyd[800]: Loaded 0 symmetric keys
Jan 31 04:20:19 np0005603787 chronyd[800]: Using right/UTC timezone to obtain leap second data
Jan 31 04:20:19 np0005603787 chronyd[800]: Loaded seccomp filter (level 2)
Jan 31 04:20:19 np0005603787 systemd[1]: Started NTP client/server.
Jan 31 04:20:19 np0005603787 systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 04:20:19 np0005603787 systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 04:20:19 np0005603787 systemd-logind[786]: New seat seat0.
Jan 31 04:20:19 np0005603787 systemd[1]: Started User Login Management.
Jan 31 04:20:20 np0005603787 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 31 04:20:20 np0005603787 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 31 04:20:20 np0005603787 kernel: Console: switching to colour dummy device 80x25
Jan 31 04:20:20 np0005603787 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 31 04:20:20 np0005603787 kernel: [drm] features: -context_init
Jan 31 04:20:20 np0005603787 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 31 04:20:20 np0005603787 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 31 04:20:20 np0005603787 kernel: [drm] number of scanouts: 1
Jan 31 04:20:20 np0005603787 kernel: [drm] number of cap sets: 0
Jan 31 04:20:20 np0005603787 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 31 04:20:20 np0005603787 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 31 04:20:20 np0005603787 kernel: Console: switching to colour frame buffer device 128x48
Jan 31 04:20:20 np0005603787 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 31 04:20:20 np0005603787 kernel: kvm_amd: TSC scaling supported
Jan 31 04:20:20 np0005603787 kernel: kvm_amd: Nested Virtualization enabled
Jan 31 04:20:20 np0005603787 kernel: kvm_amd: Nested Paging enabled
Jan 31 04:20:20 np0005603787 kernel: kvm_amd: LBR virtualization supported
Jan 31 04:20:20 np0005603787 iptables.init[780]: iptables: Applying firewall rules: [  OK  ]
Jan 31 04:20:20 np0005603787 systemd[1]: Finished IPv4 firewall with iptables.
Jan 31 04:20:20 np0005603787 cloud-init[841]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sat, 31 Jan 2026 09:20:20 +0000. Up 8.03 seconds.
Jan 31 04:20:20 np0005603787 systemd[1]: run-cloud\x2dinit-tmp-tmpcyqzjbu0.mount: Deactivated successfully.
Jan 31 04:20:20 np0005603787 systemd[1]: Starting Hostname Service...
Jan 31 04:20:20 np0005603787 systemd[1]: Started Hostname Service.
Jan 31 04:20:20 np0005603787 systemd-hostnamed[855]: Hostname set to <np0005603787.novalocal> (static)
Jan 31 04:20:21 np0005603787 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 31 04:20:21 np0005603787 systemd[1]: Reached target Preparation for Network.
Jan 31 04:20:21 np0005603787 systemd[1]: Starting Network Manager...
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.1934] NetworkManager (version 1.54.3-2.el9) is starting... (boot:3890a77f-f0f1-4a23-84f1-1930fb6c021a)
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.1938] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2097] manager[0x564e01155000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2142] hostname: hostname: using hostnamed
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2142] hostname: static hostname changed from (none) to "np0005603787.novalocal"
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2147] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2254] manager[0x564e01155000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2254] manager[0x564e01155000]: rfkill: WWAN hardware radio set enabled
Jan 31 04:20:21 np0005603787 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2350] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2352] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2353] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2354] manager: Networking is enabled by state file
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2356] settings: Loaded settings plugin: keyfile (internal)
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2403] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2426] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2443] dhcp: init: Using DHCP client 'internal'
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2449] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2461] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2472] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2486] device (lo): Activation: starting connection 'lo' (8dcb1e44-759d-480c-a0e9-6890091fb566)
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2493] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2495] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2516] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2519] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2521] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2523] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2524] device (eth0): carrier: link connected
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2526] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2529] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2534] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2536] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2537] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2538] manager: NetworkManager state is now CONNECTING
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2539] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2545] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2547] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 04:20:21 np0005603787 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2577] dhcp4 (eth0): state changed new lease, address=38.129.56.90
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2584] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 04:20:21 np0005603787 systemd[1]: Started Network Manager.
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2599] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 04:20:21 np0005603787 systemd[1]: Reached target Network.
Jan 31 04:20:21 np0005603787 systemd[1]: Starting Network Manager Wait Online...
Jan 31 04:20:21 np0005603787 systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 31 04:20:21 np0005603787 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2816] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2818] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2819] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2824] device (lo): Activation: successful, device activated.
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2829] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2832] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2834] device (eth0): Activation: successful, device activated.
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2839] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 04:20:21 np0005603787 NetworkManager[859]: <info>  [1769851221.2841] manager: startup complete
Jan 31 04:20:21 np0005603787 systemd[1]: Started GSSAPI Proxy Daemon.
Jan 31 04:20:21 np0005603787 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 04:20:21 np0005603787 systemd[1]: Reached target NFS client services.
Jan 31 04:20:21 np0005603787 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 04:20:21 np0005603787 systemd[1]: Reached target Remote File Systems.
Jan 31 04:20:21 np0005603787 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 04:20:21 np0005603787 systemd[1]: Finished Network Manager Wait Online.
Jan 31 04:20:21 np0005603787 systemd[1]: Starting Cloud-init: Network Stage...
Jan 31 04:20:21 np0005603787 cloud-init[919]: Cloud-init v. 24.4-8.el9 running 'init' at Sat, 31 Jan 2026 09:20:21 +0000. Up 9.00 seconds.
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: |  eth0  | True |         38.129.56.90         | 255.255.255.0 | global | fa:16:3e:69:c2:47 |
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: |  eth0  | True | fe80::f816:3eff:fe69:c247/64 |       .       |  link  | fa:16:3e:69:c2:47 |
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 31 04:20:21 np0005603787 cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 04:20:23 np0005603787 cloud-init[919]: Generating public/private rsa key pair.
Jan 31 04:20:23 np0005603787 cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 31 04:20:23 np0005603787 cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 31 04:20:23 np0005603787 cloud-init[919]: The key fingerprint is:
Jan 31 04:20:23 np0005603787 cloud-init[919]: SHA256:5V6nALKW7QHiDhW9U4Ofg1jg3y4f34P/ItQCzKxKT1E root@np0005603787.novalocal
Jan 31 04:20:23 np0005603787 cloud-init[919]: The key's randomart image is:
Jan 31 04:20:23 np0005603787 cloud-init[919]: +---[RSA 3072]----+
Jan 31 04:20:23 np0005603787 cloud-init[919]: |    oo .         |
Jan 31 04:20:23 np0005603787 cloud-init[919]: |   . .+ E        |
Jan 31 04:20:23 np0005603787 cloud-init[919]: |    +ooX.o.      |
Jan 31 04:20:23 np0005603787 cloud-init[919]: |   o.o=BO+       |
Jan 31 04:20:23 np0005603787 cloud-init[919]: |  . . =+Soo.. .  |
Jan 31 04:20:23 np0005603787 cloud-init[919]: |   o..oo ooo.o   |
Jan 31 04:20:23 np0005603787 cloud-init[919]: |   ..+. +..o.    |
Jan 31 04:20:23 np0005603787 cloud-init[919]: |    . .o oo.o    |
Jan 31 04:20:23 np0005603787 cloud-init[919]: |        . .oo+.  |
Jan 31 04:20:23 np0005603787 cloud-init[919]: +----[SHA256]-----+
Jan 31 04:20:23 np0005603787 cloud-init[919]: Generating public/private ecdsa key pair.
Jan 31 04:20:23 np0005603787 cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 31 04:20:23 np0005603787 cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 31 04:20:23 np0005603787 cloud-init[919]: The key fingerprint is:
Jan 31 04:20:23 np0005603787 cloud-init[919]: SHA256:aQZAvjzuo6JHStBcID4BZF0tcfuN/WvpKF2jcDhVPOI root@np0005603787.novalocal
Jan 31 04:20:23 np0005603787 cloud-init[919]: The key's randomart image is:
Jan 31 04:20:23 np0005603787 cloud-init[919]: +---[ECDSA 256]---+
Jan 31 04:20:23 np0005603787 cloud-init[919]: |=+oo+oo.    .    |
Jan 31 04:20:23 np0005603787 cloud-init[919]: |o..o.o...  . +   |
Jan 31 04:20:23 np0005603787 cloud-init[919]: | = .. o.  . o .  |
Jan 31 04:20:23 np0005603787 cloud-init[919]: |. +. . ...+E     |
Jan 31 04:20:23 np0005603787 cloud-init[919]: |.   +   Sooo     |
Jan 31 04:20:23 np0005603787 cloud-init[919]: | ... . o + ..o   |
Jan 31 04:20:23 np0005603787 cloud-init[919]: |.o  .     = o.o  |
Jan 31 04:20:23 np0005603787 cloud-init[919]: |o ...    . o.o.  |
Jan 31 04:20:23 np0005603787 cloud-init[919]: |oo....    ..oo   |
Jan 31 04:20:23 np0005603787 cloud-init[919]: +----[SHA256]-----+
Jan 31 04:20:23 np0005603787 cloud-init[919]: Generating public/private ed25519 key pair.
Jan 31 04:20:23 np0005603787 cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 31 04:20:23 np0005603787 cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 31 04:20:23 np0005603787 cloud-init[919]: The key fingerprint is:
Jan 31 04:20:23 np0005603787 cloud-init[919]: SHA256:OziFfyszAosZbGSDTVRNuXH5nkySDy5/ij4ibaHsZ48 root@np0005603787.novalocal
Jan 31 04:20:23 np0005603787 cloud-init[919]: The key's randomart image is:
Jan 31 04:20:23 np0005603787 cloud-init[919]: +--[ED25519 256]--+
Jan 31 04:20:23 np0005603787 cloud-init[919]: | ....o.. .       |
Jan 31 04:20:23 np0005603787 cloud-init[919]: |  .   + o        |
Jan 31 04:20:23 np0005603787 cloud-init[919]: | +     + o       |
Jan 31 04:20:23 np0005603787 cloud-init[919]: |. =   ..+ o      |
Jan 31 04:20:23 np0005603787 cloud-init[919]: | + .  ..SB .     |
Jan 31 04:20:23 np0005603787 cloud-init[919]: |  + o .+..=      |
Jan 31 04:20:23 np0005603787 cloud-init[919]: | o * +oo+ .      |
Jan 31 04:20:23 np0005603787 cloud-init[919]: |  * B.oo=o..     |
Jan 31 04:20:23 np0005603787 cloud-init[919]: | ..=E++o.*.      |
Jan 31 04:20:23 np0005603787 cloud-init[919]: +----[SHA256]-----+
Jan 31 04:20:23 np0005603787 systemd[1]: Finished Cloud-init: Network Stage.
Jan 31 04:20:23 np0005603787 systemd[1]: Reached target Cloud-config availability.
Jan 31 04:20:23 np0005603787 systemd[1]: Reached target Network is Online.
Jan 31 04:20:23 np0005603787 systemd[1]: Starting Cloud-init: Config Stage...
Jan 31 04:20:23 np0005603787 systemd[1]: Starting Crash recovery kernel arming...
Jan 31 04:20:23 np0005603787 systemd[1]: Starting Notify NFS peers of a restart...
Jan 31 04:20:23 np0005603787 systemd[1]: Starting System Logging Service...
Jan 31 04:20:23 np0005603787 sm-notify[1001]: Version 2.5.4 starting
Jan 31 04:20:23 np0005603787 systemd[1]: Starting OpenSSH server daemon...
Jan 31 04:20:23 np0005603787 systemd[1]: Starting Permit User Sessions...
Jan 31 04:20:23 np0005603787 systemd[1]: Started Notify NFS peers of a restart.
Jan 31 04:20:23 np0005603787 systemd[1]: Started OpenSSH server daemon.
Jan 31 04:20:23 np0005603787 systemd[1]: Finished Permit User Sessions.
Jan 31 04:20:23 np0005603787 systemd[1]: Started Command Scheduler.
Jan 31 04:20:23 np0005603787 systemd[1]: Started Getty on tty1.
Jan 31 04:20:23 np0005603787 systemd[1]: Started Serial Getty on ttyS0.
Jan 31 04:20:23 np0005603787 systemd[1]: Reached target Login Prompts.
Jan 31 04:20:23 np0005603787 rsyslogd[1002]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1002" x-info="https://www.rsyslog.com"] start
Jan 31 04:20:23 np0005603787 rsyslogd[1002]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 31 04:20:23 np0005603787 systemd[1]: Started System Logging Service.
Jan 31 04:20:23 np0005603787 systemd[1]: Reached target Multi-User System.
Jan 31 04:20:23 np0005603787 systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 31 04:20:23 np0005603787 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 31 04:20:23 np0005603787 systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 31 04:20:23 np0005603787 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 04:20:23 np0005603787 kdumpctl[1012]: kdump: No kdump initial ramdisk found.
Jan 31 04:20:23 np0005603787 kdumpctl[1012]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Jan 31 04:20:23 np0005603787 cloud-init[1148]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sat, 31 Jan 2026 09:20:23 +0000. Up 10.82 seconds.
Jan 31 04:20:23 np0005603787 systemd[1]: Finished Cloud-init: Config Stage.
Jan 31 04:20:23 np0005603787 systemd[1]: Starting Cloud-init: Final Stage...
Jan 31 04:20:23 np0005603787 dracut[1281]: dracut-057-102.git20250818.el9
Jan 31 04:20:23 np0005603787 dracut[1283]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Jan 31 04:20:24 np0005603787 cloud-init[1351]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sat, 31 Jan 2026 09:20:23 +0000. Up 11.34 seconds.
Jan 31 04:20:24 np0005603787 cloud-init[1353]: #############################################################
Jan 31 04:20:24 np0005603787 cloud-init[1354]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 31 04:20:24 np0005603787 cloud-init[1356]: 256 SHA256:aQZAvjzuo6JHStBcID4BZF0tcfuN/WvpKF2jcDhVPOI root@np0005603787.novalocal (ECDSA)
Jan 31 04:20:24 np0005603787 cloud-init[1358]: 256 SHA256:OziFfyszAosZbGSDTVRNuXH5nkySDy5/ij4ibaHsZ48 root@np0005603787.novalocal (ED25519)
Jan 31 04:20:24 np0005603787 cloud-init[1360]: 3072 SHA256:5V6nALKW7QHiDhW9U4Ofg1jg3y4f34P/ItQCzKxKT1E root@np0005603787.novalocal (RSA)
Jan 31 04:20:24 np0005603787 cloud-init[1363]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 31 04:20:24 np0005603787 cloud-init[1364]: #############################################################
Jan 31 04:20:24 np0005603787 cloud-init[1351]: Cloud-init v. 24.4-8.el9 finished at Sat, 31 Jan 2026 09:20:24 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.58 seconds
Jan 31 04:20:24 np0005603787 systemd[1]: Finished Cloud-init: Final Stage.
Jan 31 04:20:24 np0005603787 systemd[1]: Reached target Cloud-init target.
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 04:20:24 np0005603787 dracut[1283]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: memstrack is not available
Jan 31 04:20:25 np0005603787 dracut[1283]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 04:20:25 np0005603787 dracut[1283]: memstrack is not available
Jan 31 04:20:25 np0005603787 dracut[1283]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 04:20:25 np0005603787 dracut[1283]: *** Including module: systemd ***
Jan 31 04:20:25 np0005603787 dracut[1283]: *** Including module: fips ***
Jan 31 04:20:25 np0005603787 dracut[1283]: *** Including module: systemd-initrd ***
Jan 31 04:20:25 np0005603787 dracut[1283]: *** Including module: i18n ***
Jan 31 04:20:25 np0005603787 dracut[1283]: *** Including module: drm ***
Jan 31 04:20:26 np0005603787 dracut[1283]: *** Including module: prefixdevname ***
Jan 31 04:20:26 np0005603787 dracut[1283]: *** Including module: kernel-modules ***
Jan 31 04:20:26 np0005603787 chronyd[800]: Selected source 167.160.187.179 (2.centos.pool.ntp.org)
Jan 31 04:20:26 np0005603787 chronyd[800]: System clock TAI offset set to 37 seconds
Jan 31 04:20:26 np0005603787 kernel: block vda: the capability attribute has been deprecated.
Jan 31 04:20:26 np0005603787 dracut[1283]: *** Including module: kernel-modules-extra ***
Jan 31 04:20:26 np0005603787 dracut[1283]: *** Including module: qemu ***
Jan 31 04:20:26 np0005603787 dracut[1283]: *** Including module: fstab-sys ***
Jan 31 04:20:26 np0005603787 dracut[1283]: *** Including module: rootfs-block ***
Jan 31 04:20:26 np0005603787 dracut[1283]: *** Including module: terminfo ***
Jan 31 04:20:26 np0005603787 dracut[1283]: *** Including module: udev-rules ***
Jan 31 04:20:26 np0005603787 dracut[1283]: Skipping udev rule: 91-permissions.rules
Jan 31 04:20:26 np0005603787 dracut[1283]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 31 04:20:27 np0005603787 dracut[1283]: *** Including module: virtiofs ***
Jan 31 04:20:27 np0005603787 dracut[1283]: *** Including module: dracut-systemd ***
Jan 31 04:20:27 np0005603787 dracut[1283]: *** Including module: usrmount ***
Jan 31 04:20:27 np0005603787 dracut[1283]: *** Including module: base ***
Jan 31 04:20:27 np0005603787 dracut[1283]: *** Including module: fs-lib ***
Jan 31 04:20:27 np0005603787 dracut[1283]: *** Including module: kdumpbase ***
Jan 31 04:20:27 np0005603787 dracut[1283]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 31 04:20:27 np0005603787 dracut[1283]:  microcode_ctl module: mangling fw_dir
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: configuration "intel" is ignored
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 31 04:20:27 np0005603787 dracut[1283]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 31 04:20:27 np0005603787 dracut[1283]: *** Including module: openssl ***
Jan 31 04:20:27 np0005603787 dracut[1283]: *** Including module: shutdown ***
Jan 31 04:20:27 np0005603787 dracut[1283]: *** Including module: squash ***
Jan 31 04:20:27 np0005603787 dracut[1283]: *** Including modules done ***
Jan 31 04:20:27 np0005603787 dracut[1283]: *** Installing kernel module dependencies ***
Jan 31 04:20:28 np0005603787 dracut[1283]: *** Installing kernel module dependencies done ***
Jan 31 04:20:28 np0005603787 dracut[1283]: *** Resolving executable dependencies ***
Jan 31 04:20:29 np0005603787 dracut[1283]: *** Resolving executable dependencies done ***
Jan 31 04:20:29 np0005603787 dracut[1283]: *** Generating early-microcode cpio image ***
Jan 31 04:20:29 np0005603787 dracut[1283]: *** Store current command line parameters ***
Jan 31 04:20:29 np0005603787 dracut[1283]: Stored kernel commandline:
Jan 31 04:20:29 np0005603787 dracut[1283]: No dracut internal kernel commandline stored in the initramfs
Jan 31 04:20:29 np0005603787 dracut[1283]: *** Install squash loader ***
Jan 31 04:20:30 np0005603787 irqbalance[784]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 31 04:20:30 np0005603787 irqbalance[784]: IRQ 25 affinity is now unmanaged
Jan 31 04:20:30 np0005603787 irqbalance[784]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 31 04:20:30 np0005603787 irqbalance[784]: IRQ 31 affinity is now unmanaged
Jan 31 04:20:30 np0005603787 irqbalance[784]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 31 04:20:30 np0005603787 irqbalance[784]: IRQ 28 affinity is now unmanaged
Jan 31 04:20:30 np0005603787 irqbalance[784]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 31 04:20:30 np0005603787 irqbalance[784]: IRQ 32 affinity is now unmanaged
Jan 31 04:20:30 np0005603787 irqbalance[784]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 31 04:20:30 np0005603787 irqbalance[784]: IRQ 30 affinity is now unmanaged
Jan 31 04:20:30 np0005603787 irqbalance[784]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 31 04:20:30 np0005603787 irqbalance[784]: IRQ 29 affinity is now unmanaged
Jan 31 04:20:30 np0005603787 dracut[1283]: *** Squashing the files inside the initramfs ***
Jan 31 04:20:31 np0005603787 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 04:20:31 np0005603787 dracut[1283]: *** Squashing the files inside the initramfs done ***
Jan 31 04:20:31 np0005603787 dracut[1283]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Jan 31 04:20:31 np0005603787 dracut[1283]: *** Hardlinking files ***
Jan 31 04:20:31 np0005603787 dracut[1283]: *** Hardlinking files done ***
Jan 31 04:20:32 np0005603787 dracut[1283]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Jan 31 04:20:33 np0005603787 kdumpctl[1012]: kdump: kexec: loaded kdump kernel
Jan 31 04:20:33 np0005603787 kdumpctl[1012]: kdump: Starting kdump: [OK]
Jan 31 04:20:33 np0005603787 systemd[1]: Finished Crash recovery kernel arming.
Jan 31 04:20:33 np0005603787 systemd[1]: Startup finished in 2.188s (kernel) + 2.628s (initrd) + 15.702s (userspace) = 20.519s.
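The dracut run above rebuilds the kdump initramfs and kdumpctl then arms the crash kernel via kexec ("kdump: Starting kdump: [OK]"). A minimal sketch, assuming only the standard /sys/kernel/kexec_crash_* nodes (not themselves shown in the log), of how that state can be verified from userspace:

    #!/usr/bin/env python3
    # Hedged sketch (not part of the job): confirm that a crash kernel is
    # loaded and how much memory the crashkernel= reservation holds.
    from pathlib import Path

    def crash_kernel_status():
        loaded = Path("/sys/kernel/kexec_crash_loaded").read_text().strip() == "1"
        size = int(Path("/sys/kernel/kexec_crash_size").read_text().strip())
        return loaded, size

    if __name__ == "__main__":
        loaded, size = crash_kernel_status()
        print(f"crash kernel loaded: {loaded}, reserved: {size // (1024 * 1024)} MiB")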
Jan 31 04:20:47 np0005603787 systemd[1]: Created slice User Slice of UID 1000.
Jan 31 04:20:47 np0005603787 systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 31 04:20:47 np0005603787 systemd-logind[786]: New session 1 of user zuul.
Jan 31 04:20:47 np0005603787 systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 31 04:20:47 np0005603787 systemd[1]: Starting User Manager for UID 1000...
Jan 31 04:20:47 np0005603787 systemd[4303]: Queued start job for default target Main User Target.
Jan 31 04:20:47 np0005603787 systemd[4303]: Created slice User Application Slice.
Jan 31 04:20:47 np0005603787 systemd[4303]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 04:20:47 np0005603787 systemd[4303]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 04:20:47 np0005603787 systemd[4303]: Reached target Paths.
Jan 31 04:20:47 np0005603787 systemd[4303]: Reached target Timers.
Jan 31 04:20:47 np0005603787 systemd[4303]: Starting D-Bus User Message Bus Socket...
Jan 31 04:20:47 np0005603787 systemd[4303]: Starting Create User's Volatile Files and Directories...
Jan 31 04:20:47 np0005603787 systemd[4303]: Listening on D-Bus User Message Bus Socket.
Jan 31 04:20:47 np0005603787 systemd[4303]: Finished Create User's Volatile Files and Directories.
Jan 31 04:20:47 np0005603787 systemd[4303]: Reached target Sockets.
Jan 31 04:20:47 np0005603787 systemd[4303]: Reached target Basic System.
Jan 31 04:20:47 np0005603787 systemd[4303]: Reached target Main User Target.
Jan 31 04:20:47 np0005603787 systemd[4303]: Startup finished in 114ms.
Jan 31 04:20:47 np0005603787 systemd[1]: Started User Manager for UID 1000.
Jan 31 04:20:47 np0005603787 systemd[1]: Started Session 1 of User zuul.
Jan 31 04:20:47 np0005603787 python3[4385]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:20:51 np0005603787 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 04:20:52 np0005603787 python3[4415]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:20:58 np0005603787 python3[4473]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:20:59 np0005603787 python3[4513]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 31 04:21:01 np0005603787 python3[4539]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCi9LeKB1GN9BpNSYNRd0TrcjXWazKn1ypQywEyFeW+fBQOuMW0OsPoarTkExIr2uOn+frShZS50eqVBv3gmZOOOiuECxWZuYBDKNuF2oojV/fkVAZXs/9IYwT3BzVmAOw/6GOrjX1B18Vs4Vjvjql9qiAJ/BQZzjfZeJ9cRC+mBID9lkDqtW7j8vQTHKA1CHhlvr8PNNjpimU0YJ+SS5XQ7OmV/8b4eSQ9l5+Q4HaYZ2pMWYMLcoqXMVpR1Pgk8rbyDxCX8ULJDtayH7J9CvuPwYce9/DI21ZYOltKXjsFg1gWqLbq9YhdfeOLb/9Ptj8s0kPFdJFjwLLgcLcLyLnvucY6FEvH5PCeEzfxSvoDgPGWbLU1ItPoeDguuMaG57N93WgsBEdS1jC9UfEVdYSFDCAzB/a5DlTZv+PiqheII5aHYqmS8NTRSBLAhPxbei7d0ymC2k0+zrWZjq7sGXl31KNAUqk2FvDq50QzhR/Cj4t0u5mofp//dgBGDXIZpDc= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:01 np0005603787 python3[4563]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:21:02 np0005603787 python3[4662]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:21:02 np0005603787 python3[4733]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769851261.9129808-207-253516528004525/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=126e36086a7e4637b59e09f7340cfa24_id_rsa follow=False checksum=d7aeea7515011ecea0ac2cf4f935e601b3ff51cf backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:21:03 np0005603787 python3[4856]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:21:03 np0005603787 python3[4927]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769851262.928347-240-253368318983618/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=126e36086a7e4637b59e09f7340cfa24_id_rsa.pub follow=False checksum=274fcf8c472b513fc175f53091063e1c7ca76fda backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:21:04 np0005603787 python3[4975]: ansible-ping Invoked with data=pong
Jan 31 04:21:06 np0005603787 python3[4999]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:21:08 np0005603787 python3[5057]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 31 04:21:09 np0005603787 python3[5089]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:21:09 np0005603787 python3[5113]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:21:09 np0005603787 python3[5137]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:21:10 np0005603787 python3[5161]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:21:10 np0005603787 python3[5185]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:21:10 np0005603787 python3[5209]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:21:12 np0005603787 python3[5235]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:21:13 np0005603787 python3[5313]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:21:13 np0005603787 python3[5386]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769851273.075416-21-115002343938871/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:21:14 np0005603787 python3[5434]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:14 np0005603787 python3[5458]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:15 np0005603787 python3[5482]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:15 np0005603787 python3[5506]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:15 np0005603787 python3[5530]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:15 np0005603787 python3[5554]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:16 np0005603787 python3[5578]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:16 np0005603787 python3[5602]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:16 np0005603787 python3[5626]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:17 np0005603787 python3[5650]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:17 np0005603787 python3[5674]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:17 np0005603787 python3[5698]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:17 np0005603787 python3[5722]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:18 np0005603787 python3[5746]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:18 np0005603787 python3[5770]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:18 np0005603787 python3[5794]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:18 np0005603787 python3[5818]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:19 np0005603787 python3[5842]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:19 np0005603787 python3[5866]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:19 np0005603787 python3[5890]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:19 np0005603787 python3[5914]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:20 np0005603787 python3[5938]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:20 np0005603787 python3[5962]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:20 np0005603787 irqbalance[784]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 31 04:21:20 np0005603787 irqbalance[784]: IRQ 27 affinity is now unmanaged
Jan 31 04:21:20 np0005603787 python3[5986]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:21 np0005603787 python3[6010]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:21:21 np0005603787 python3[6034]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
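The ansible-authorized_key invocations above install a series of maintainer public keys for the zuul user. A rough, hedged approximation of what each call does (this is not the Ansible module itself); the user name and key string below are placeholders:

    #!/usr/bin/env python3
    # Sketch: idempotently append a public key to a user's authorized_keys
    # with safe ownership and permissions, roughly what authorized_key does.
    import os
    import pwd
    from pathlib import Path

    def add_authorized_key(user: str, pubkey: str) -> bool:
        pw = pwd.getpwnam(user)
        ssh_dir = Path(pw.pw_dir) / ".ssh"
        ssh_dir.mkdir(mode=0o700, exist_ok=True)
        auth = ssh_dir / "authorized_keys"
        existing = auth.read_text().splitlines() if auth.exists() else []
        if pubkey in existing:
            return False                      # already present, nothing to do
        with auth.open("a") as fh:
            fh.write(pubkey + "\n")
        os.chmod(auth, 0o600)
        os.chown(ssh_dir, pw.pw_uid, pw.pw_gid)
        os.chown(auth, pw.pw_uid, pw.pw_gid)
        return True

    if __name__ == "__main__":
        add_authorized_key("zuul", "ssh-ed25519 AAAA... example@host")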
Jan 31 04:21:25 np0005603787 python3[6060]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 04:21:25 np0005603787 systemd[1]: Starting Time & Date Service...
Jan 31 04:21:25 np0005603787 systemd[1]: Started Time & Date Service.
Jan 31 04:21:25 np0005603787 systemd-timedated[6062]: Changed time zone to 'UTC' (UTC).
Jan 31 04:21:25 np0005603787 python3[6091]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:21:26 np0005603787 python3[6167]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:21:26 np0005603787 python3[6238]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769851285.9153268-153-150876187063393/source _original_basename=tmp0cghdvqx follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:21:26 np0005603787 python3[6338]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:21:27 np0005603787 python3[6409]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769851286.7527199-183-66384930161143/source _original_basename=tmp1po6w8wb follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:21:28 np0005603787 python3[6511]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:21:28 np0005603787 python3[6584]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769851287.863477-231-233295776032570/source _original_basename=tmpjuyzca0q follow=False checksum=eaecea7361aa5ec897067a0de4e232b47eea7734 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:21:29 np0005603787 python3[6632]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:21:29 np0005603787 python3[6658]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:21:29 np0005603787 python3[6738]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:21:29 np0005603787 python3[6811]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769851289.435924-273-247910156273333/source _original_basename=tmpy6xgx26w follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:21:30 np0005603787 python3[6862]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-2af9-6196-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
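The two steps above install /etc/sudoers.d/zuul-sudo-grep with mode 0440 (decimal 288 in the module arguments) and then run /usr/sbin/visudo -c to check the resulting sudoers configuration. A minimal sketch of the same pattern, with an illustrative rule that is not taken from the job:

    #!/usr/bin/env python3
    # Sketch: install a sudoers drop-in with mode 0440 and keep it only if
    # visudo accepts the syntax, so a bad file never breaks sudo.
    import os
    import subprocess
    import tempfile

    RULE = "zuul ALL=(ALL) NOPASSWD:ALL\n"   # placeholder rule
    TARGET = "/etc/sudoers.d/zuul-sudo-grep"

    def install_sudoers_dropin(content: str, target: str) -> None:
        fd, tmp = tempfile.mkstemp(dir="/etc/sudoers.d", prefix=".tmp-")
        try:
            with os.fdopen(fd, "w") as fh:
                fh.write(content)
            os.chmod(tmp, 0o440)
            # Validate the candidate file before it becomes active.
            subprocess.run(["/usr/sbin/visudo", "-c", "-f", tmp], check=True)
            os.rename(tmp, target)
        except Exception:
            os.unlink(tmp)
            raise

    if __name__ == "__main__":
        install_sudoers_dropin(RULE, TARGET)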
Jan 31 04:21:31 np0005603787 python3[6890]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-2af9-6196-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 31 04:21:32 np0005603787 python3[6918]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:21:32 np0005603787 chronyd[800]: Selected source 138.197.164.54 (2.centos.pool.ntp.org)
Jan 31 04:21:52 np0005603787 python3[6944]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:21:55 np0005603787 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 04:22:28 np0005603787 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 04:22:28 np0005603787 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 31 04:22:28 np0005603787 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 31 04:22:28 np0005603787 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 31 04:22:28 np0005603787 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 31 04:22:28 np0005603787 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 31 04:22:28 np0005603787 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 31 04:22:28 np0005603787 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 31 04:22:28 np0005603787 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 31 04:22:28 np0005603787 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 31 04:22:28 np0005603787 NetworkManager[859]: <info>  [1769851348.2474] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 04:22:28 np0005603787 systemd-udevd[6947]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:22:28 np0005603787 NetworkManager[859]: <info>  [1769851348.2609] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 04:22:28 np0005603787 NetworkManager[859]: <info>  [1769851348.2636] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 31 04:22:28 np0005603787 NetworkManager[859]: <info>  [1769851348.2639] device (eth1): carrier: link connected
Jan 31 04:22:28 np0005603787 NetworkManager[859]: <info>  [1769851348.2641] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 04:22:28 np0005603787 NetworkManager[859]: <info>  [1769851348.2646] policy: auto-activating connection 'Wired connection 1' (d613f60b-0bbf-30ef-976f-d1c33beaaf01)
Jan 31 04:22:28 np0005603787 NetworkManager[859]: <info>  [1769851348.2651] device (eth1): Activation: starting connection 'Wired connection 1' (d613f60b-0bbf-30ef-976f-d1c33beaaf01)
Jan 31 04:22:28 np0005603787 NetworkManager[859]: <info>  [1769851348.2652] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 04:22:28 np0005603787 NetworkManager[859]: <info>  [1769851348.2655] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 04:22:28 np0005603787 NetworkManager[859]: <info>  [1769851348.2659] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 04:22:28 np0005603787 NetworkManager[859]: <info>  [1769851348.2663] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 04:22:29 np0005603787 python3[6974]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-c4ab-b4fd-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:22:39 np0005603787 python3[7054]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:22:39 np0005603787 python3[7127]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769851358.8448315-102-51786950785299/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=fe94b22c4527d643f5268ae36d0cbb8cb6627be2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:22:40 np0005603787 python3[7177]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
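Here the job drops a keyfile profile for the CI private network into /etc/NetworkManager/system-connections/ and restarts NetworkManager so it is loaded; the profile contents themselves are not logged. A hedged sketch with placeholder interface name and address:

    #!/usr/bin/env python3
    # Sketch: write a NetworkManager keyfile profile and restart the service,
    # mirroring the copy + systemd restart steps above. The [connection] and
    # [ipv4] values are illustrative, not the job's real configuration.
    import os
    import subprocess
    from pathlib import Path

    KEYFILE = """\
    [connection]
    id=ci-private-network
    type=ethernet
    interface-name=eth1

    [ipv4]
    method=manual
    addresses=192.0.2.10/24

    [ipv6]
    method=ignore
    """

    def install_profile() -> None:
        path = Path("/etc/NetworkManager/system-connections/ci-private-network.nmconnection")
        path.write_text(KEYFILE)
        os.chmod(path, 0o600)   # NetworkManager refuses keyfiles readable by others
        subprocess.run(["systemctl", "restart", "NetworkManager"], check=True)

    if __name__ == "__main__":
        install_profile()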
Jan 31 04:22:40 np0005603787 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 04:22:40 np0005603787 systemd[1]: Stopped Network Manager Wait Online.
Jan 31 04:22:40 np0005603787 systemd[1]: Stopping Network Manager Wait Online...
Jan 31 04:22:40 np0005603787 systemd[1]: Stopping Network Manager...
Jan 31 04:22:40 np0005603787 NetworkManager[859]: <info>  [1769851360.2502] caught SIGTERM, shutting down normally.
Jan 31 04:22:40 np0005603787 NetworkManager[859]: <info>  [1769851360.2514] dhcp4 (eth0): canceled DHCP transaction
Jan 31 04:22:40 np0005603787 NetworkManager[859]: <info>  [1769851360.2514] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 04:22:40 np0005603787 NetworkManager[859]: <info>  [1769851360.2514] dhcp4 (eth0): state changed no lease
Jan 31 04:22:40 np0005603787 NetworkManager[859]: <info>  [1769851360.2518] manager: NetworkManager state is now CONNECTING
Jan 31 04:22:40 np0005603787 NetworkManager[859]: <info>  [1769851360.2611] dhcp4 (eth1): canceled DHCP transaction
Jan 31 04:22:40 np0005603787 NetworkManager[859]: <info>  [1769851360.2612] dhcp4 (eth1): state changed no lease
Jan 31 04:22:40 np0005603787 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 04:22:40 np0005603787 NetworkManager[859]: <info>  [1769851360.2657] exiting (success)
Jan 31 04:22:40 np0005603787 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 04:22:40 np0005603787 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 04:22:40 np0005603787 systemd[1]: Stopped Network Manager.
Jan 31 04:22:40 np0005603787 systemd[1]: NetworkManager.service: Consumed 1.324s CPU time, 10.3M memory peak.
Jan 31 04:22:40 np0005603787 systemd[1]: Starting Network Manager...
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3187] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:3890a77f-f0f1-4a23-84f1-1930fb6c021a)
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3191] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3234] manager[0x5605905c3000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 04:22:40 np0005603787 systemd[1]: Starting Hostname Service...
Jan 31 04:22:40 np0005603787 systemd[1]: Started Hostname Service.
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3833] hostname: hostname: using hostnamed
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3836] hostname: static hostname changed from (none) to "np0005603787.novalocal"
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3841] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3844] manager[0x5605905c3000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3845] manager[0x5605905c3000]: rfkill: WWAN hardware radio set enabled
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3867] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3867] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3868] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3868] manager: Networking is enabled by state file
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3870] settings: Loaded settings plugin: keyfile (internal)
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3873] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3894] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3901] dhcp: init: Using DHCP client 'internal'
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3904] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3909] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3913] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3920] device (lo): Activation: starting connection 'lo' (8dcb1e44-759d-480c-a0e9-6890091fb566)
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3932] device (eth0): carrier: link connected
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3936] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3943] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3943] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3951] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3961] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3966] device (eth1): carrier: link connected
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3970] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3975] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (d613f60b-0bbf-30ef-976f-d1c33beaaf01) (indicated)
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3976] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3980] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3987] device (eth1): Activation: starting connection 'Wired connection 1' (d613f60b-0bbf-30ef-976f-d1c33beaaf01)
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3993] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 04:22:40 np0005603787 systemd[1]: Started Network Manager.
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3998] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.3999] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4001] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4002] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4004] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4005] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4008] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4010] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4017] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4020] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4030] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4033] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4043] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4048] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4052] device (lo): Activation: successful, device activated.
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4105] dhcp4 (eth0): state changed new lease, address=38.129.56.90
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4111] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4185] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 04:22:40 np0005603787 systemd[1]: Starting Network Manager Wait Online...
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4240] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4241] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4245] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4247] device (eth0): Activation: successful, device activated.
Jan 31 04:22:40 np0005603787 NetworkManager[7189]: <info>  [1769851360.4252] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 04:22:40 np0005603787 python3[7261]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-c4ab-b4fd-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:22:50 np0005603787 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 04:23:10 np0005603787 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6202] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 04:23:25 np0005603787 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 04:23:25 np0005603787 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6463] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6466] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6476] device (eth1): Activation: successful, device activated.
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6487] manager: startup complete
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6492] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <warn>  [1769851405.6504] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6514] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 31 04:23:25 np0005603787 systemd[1]: Finished Network Manager Wait Online.
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6701] dhcp4 (eth1): canceled DHCP transaction
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6701] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6702] dhcp4 (eth1): state changed no lease
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6724] policy: auto-activating connection 'ci-private-network' (d3b12458-23c1-57b0-aa6a-91480d18c487)
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6729] device (eth1): Activation: starting connection 'ci-private-network' (d3b12458-23c1-57b0-aa6a-91480d18c487)
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6731] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6736] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6744] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6753] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6804] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6807] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 04:23:25 np0005603787 NetworkManager[7189]: <info>  [1769851405.6818] device (eth1): Activation: successful, device activated.
Jan 31 04:23:25 np0005603787 systemd[4303]: Starting Mark boot as successful...
Jan 31 04:23:25 np0005603787 systemd[4303]: Finished Mark boot as successful.
Jan 31 04:23:35 np0005603787 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 04:23:40 np0005603787 systemd-logind[786]: Session 1 logged out. Waiting for processes to exit.
Jan 31 04:23:42 np0005603787 systemd-logind[786]: New session 3 of user zuul.
Jan 31 04:23:42 np0005603787 systemd[1]: Started Session 3 of User zuul.
Jan 31 04:23:42 np0005603787 python3[7371]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:23:43 np0005603787 python3[7444]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769851422.4727645-267-55510490728343/source _original_basename=tmpfjcfcphy follow=False checksum=c2f21292d9622d716dc81234682da17445614acf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:23:45 np0005603787 systemd[1]: session-3.scope: Deactivated successfully.
Jan 31 04:23:45 np0005603787 systemd-logind[786]: Session 3 logged out. Waiting for processes to exit.
Jan 31 04:23:45 np0005603787 systemd-logind[786]: Removed session 3.
Jan 31 04:26:25 np0005603787 systemd[4303]: Created slice User Background Tasks Slice.
Jan 31 04:26:25 np0005603787 systemd[4303]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 04:26:25 np0005603787 systemd[4303]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 04:31:37 np0005603787 systemd-logind[786]: New session 4 of user zuul.
Jan 31 04:31:37 np0005603787 systemd[1]: Started Session 4 of User zuul.
Jan 31 04:31:37 np0005603787 python3[7505]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-db4b-fb20-00000000216b-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:31:37 np0005603787 python3[7534]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:31:37 np0005603787 python3[7560]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:31:38 np0005603787 python3[7586]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:31:38 np0005603787 python3[7612]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:31:39 np0005603787 python3[7638]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:31:39 np0005603787 python3[7716]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:31:39 np0005603787 python3[7789]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769851899.2567935-499-98588469433844/source _original_basename=tmp8rbwqgsj follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:31:40 np0005603787 python3[7839]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 04:31:40 np0005603787 systemd[1]: Reloading.
Jan 31 04:31:40 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:31:42 np0005603787 python3[7896]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 31 04:31:42 np0005603787 python3[7922]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:31:43 np0005603787 python3[7950]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:31:43 np0005603787 python3[7978]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:31:43 np0005603787 python3[8006]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:31:44 np0005603787 python3[8033]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-db4b-fb20-000000002172-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:31:45 np0005603787 python3[8063]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
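The four echo commands above write cgroup v2 I/O limits for block device 252:0 into the io.max file of each top-level slice, and the earlier wait_for only proceeds once /sys/fs/cgroup/system.slice/io.max exists, i.e. once the io controller is enabled there. A minimal manual equivalent, with the device number and limits taken from the log (262144000 bytes/s is 250 MiB/s):

    echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
    cat /sys/fs/cgroup/system.slice/io.max   # prints the limits currently applied to that slice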
Jan 31 04:31:46 np0005603787 systemd[1]: session-4.scope: Deactivated successfully.
Jan 31 04:31:46 np0005603787 systemd[1]: session-4.scope: Consumed 3.495s CPU time.
Jan 31 04:31:46 np0005603787 systemd-logind[786]: Session 4 logged out. Waiting for processes to exit.
Jan 31 04:31:46 np0005603787 systemd-logind[786]: Removed session 4.
Jan 31 04:31:48 np0005603787 systemd-logind[786]: New session 5 of user zuul.
Jan 31 04:31:48 np0005603787 systemd[1]: Started Session 5 of User zuul.
Jan 31 04:31:48 np0005603787 python3[8098]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 04:31:55 np0005603787 setsebool[8140]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 31 04:31:55 np0005603787 setsebool[8140]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 31 04:32:06 np0005603787 kernel: SELinux:  Converting 385 SID table entries...
Jan 31 04:32:06 np0005603787 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 04:32:06 np0005603787 kernel: SELinux:  policy capability open_perms=1
Jan 31 04:32:06 np0005603787 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 04:32:06 np0005603787 kernel: SELinux:  policy capability always_check_network=0
Jan 31 04:32:06 np0005603787 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 04:32:06 np0005603787 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 04:32:06 np0005603787 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 04:32:16 np0005603787 kernel: SELinux:  Converting 388 SID table entries...
Jan 31 04:32:16 np0005603787 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 04:32:16 np0005603787 kernel: SELinux:  policy capability open_perms=1
Jan 31 04:32:16 np0005603787 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 04:32:16 np0005603787 kernel: SELinux:  policy capability always_check_network=0
Jan 31 04:32:16 np0005603787 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 04:32:16 np0005603787 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 04:32:16 np0005603787 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 04:32:34 np0005603787 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 04:32:34 np0005603787 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 04:32:34 np0005603787 systemd[1]: Starting man-db-cache-update.service...
Jan 31 04:32:34 np0005603787 systemd[1]: Reloading.
Jan 31 04:32:34 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:32:34 np0005603787 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 04:33:09 np0005603787 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 04:33:09 np0005603787 systemd[1]: Finished man-db-cache-update.service.
Jan 31 04:33:09 np0005603787 systemd[1]: man-db-cache-update.service: Consumed 39.012s CPU time.
Jan 31 04:33:09 np0005603787 systemd[1]: run-rb6ab0d3fc2f847d2aef9e65a9d39cb3c.service: Deactivated successfully.
Jan 31 04:33:11 np0005603787 python3[29556]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-16a9-3b5b-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:33:12 np0005603787 kernel: evm: overlay not supported
Jan 31 04:33:12 np0005603787 systemd[4303]: Starting D-Bus User Message Bus...
Jan 31 04:33:12 np0005603787 dbus-broker-launch[29614]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 31 04:33:12 np0005603787 dbus-broker-launch[29614]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 31 04:33:12 np0005603787 systemd[4303]: Started D-Bus User Message Bus.
Jan 31 04:33:12 np0005603787 dbus-broker-lau[29614]: Ready
Jan 31 04:33:12 np0005603787 systemd[4303]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 04:33:12 np0005603787 systemd[4303]: Created slice Slice /user.
Jan 31 04:33:12 np0005603787 systemd[4303]: podman-29595.scope: unit configures an IP firewall, but not running as root.
Jan 31 04:33:12 np0005603787 systemd[4303]: (This warning is only shown for the first unit using IP firewalling.)
Jan 31 04:33:12 np0005603787 systemd[4303]: Started podman-29595.scope.
Jan 31 04:33:12 np0005603787 systemd[4303]: Started podman-pause-9ac9eff0.scope.
Jan 31 04:33:15 np0005603787 python3[29642]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.147:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.147:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:33:15 np0005603787 python3[29642]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
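Based on the block and marker parameters logged for the blockinfile task above, the managed block appended to /etc/containers/registries.conf should look roughly like this (rendering assumed from the module arguments):

    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.147:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK

which lets podman and buildah pull from that registry without a trusted TLS certificate.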
Jan 31 04:33:16 np0005603787 systemd[1]: session-5.scope: Deactivated successfully.
Jan 31 04:33:16 np0005603787 systemd[1]: session-5.scope: Consumed 41.742s CPU time.
Jan 31 04:33:16 np0005603787 systemd-logind[786]: Session 5 logged out. Waiting for processes to exit.
Jan 31 04:33:16 np0005603787 systemd-logind[786]: Removed session 5.
Jan 31 04:33:42 np0005603787 systemd-logind[786]: New session 6 of user zuul.
Jan 31 04:33:42 np0005603787 systemd[1]: Started Session 6 of User zuul.
Jan 31 04:33:42 np0005603787 python3[29681]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCXXH0Scn2m3UH9gFMTohv7a3RXjAtrQ2xgiysB0xwCV5rx+0ApuYu5fvEFe/cJILsXMoF88Tse/wUnT/2AYezk= zuul@np0005603786.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:33:43 np0005603787 python3[29707]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCXXH0Scn2m3UH9gFMTohv7a3RXjAtrQ2xgiysB0xwCV5rx+0ApuYu5fvEFe/cJILsXMoF88Tse/wUnT/2AYezk= zuul@np0005603786.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:33:43 np0005603787 python3[29733]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005603787.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 31 04:33:44 np0005603787 python3[29767]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCXXH0Scn2m3UH9gFMTohv7a3RXjAtrQ2xgiysB0xwCV5rx+0ApuYu5fvEFe/cJILsXMoF88Tse/wUnT/2AYezk= zuul@np0005603786.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 04:33:45 np0005603787 python3[29845]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:33:45 np0005603787 python3[29918]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852024.9666154-135-149465670982325/source _original_basename=tmp4801txzh follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:33:46 np0005603787 python3[29968]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 31 04:33:46 np0005603787 systemd[1]: Starting Hostname Service...
Jan 31 04:33:46 np0005603787 systemd[1]: Started Hostname Service.
Jan 31 04:33:46 np0005603787 systemd-hostnamed[29972]: Changed pretty hostname to 'compute-0'
Jan 31 04:33:46 np0005603787 systemd-hostnamed[29972]: Hostname set to <compute-0> (static)
Jan 31 04:33:46 np0005603787 NetworkManager[7189]: <info>  [1769852026.4887] hostname: static hostname changed from "np0005603787.novalocal" to "compute-0"
Jan 31 04:33:46 np0005603787 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 04:33:46 np0005603787 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 04:33:46 np0005603787 systemd[1]: session-6.scope: Deactivated successfully.
Jan 31 04:33:46 np0005603787 systemd[1]: session-6.scope: Consumed 1.927s CPU time.
Jan 31 04:33:46 np0005603787 systemd-logind[786]: Session 6 logged out. Waiting for processes to exit.
Jan 31 04:33:46 np0005603787 systemd-logind[786]: Removed session 6.
Jan 31 04:33:56 np0005603787 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 04:34:16 np0005603787 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 04:35:25 np0005603787 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 31 04:35:25 np0005603787 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 31 04:35:25 np0005603787 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 31 04:35:25 np0005603787 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 31 04:37:56 np0005603787 systemd-logind[786]: New session 7 of user zuul.
Jan 31 04:37:56 np0005603787 systemd[1]: Started Session 7 of User zuul.
Jan 31 04:37:57 np0005603787 python3[30068]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:37:58 np0005603787 python3[30184]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:37:59 np0005603787 python3[30257]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769852278.3392975-33605-259209677902036/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:37:59 np0005603787 python3[30283]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:37:59 np0005603787 python3[30356]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769852278.3392975-33605-259209677902036/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:37:59 np0005603787 python3[30382]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:38:00 np0005603787 python3[30455]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769852278.3392975-33605-259209677902036/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:38:00 np0005603787 python3[30481]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:38:01 np0005603787 python3[30554]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769852278.3392975-33605-259209677902036/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:38:01 np0005603787 python3[30580]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:38:01 np0005603787 python3[30653]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769852278.3392975-33605-259209677902036/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:38:01 np0005603787 python3[30679]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:38:02 np0005603787 python3[30752]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769852278.3392975-33605-259209677902036/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:38:02 np0005603787 python3[30778]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:38:02 np0005603787 python3[30851]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769852278.3392975-33605-259209677902036/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:38:18 np0005603787 python3[30909]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:43:17 np0005603787 systemd-logind[786]: Session 7 logged out. Waiting for processes to exit.
Jan 31 04:43:17 np0005603787 systemd[1]: session-7.scope: Deactivated successfully.
Jan 31 04:43:17 np0005603787 systemd[1]: session-7.scope: Consumed 4.054s CPU time.
Jan 31 04:43:17 np0005603787 systemd-logind[786]: Removed session 7.
Jan 31 04:49:16 np0005603787 systemd-logind[786]: New session 8 of user zuul.
Jan 31 04:49:16 np0005603787 systemd[1]: Started Session 8 of User zuul.
Jan 31 04:49:17 np0005603787 python3.9[31068]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:49:18 np0005603787 python3.9[31249]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
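Decoding the #012 escapes (newlines) in the _raw_params above, the shell snippet run by that task is roughly:

    set -euxo pipefail
    pushd /var/tmp
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main

i.e. it fetches the repo-setup tool, installs it into a throwaway venv and uses it to lay down the current-podified antelope repositories.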
Jan 31 04:49:27 np0005603787 systemd[1]: session-8.scope: Deactivated successfully.
Jan 31 04:49:27 np0005603787 systemd[1]: session-8.scope: Consumed 7.778s CPU time.
Jan 31 04:49:27 np0005603787 systemd-logind[786]: Session 8 logged out. Waiting for processes to exit.
Jan 31 04:49:27 np0005603787 systemd-logind[786]: Removed session 8.
Jan 31 04:49:44 np0005603787 systemd-logind[786]: New session 9 of user zuul.
Jan 31 04:49:44 np0005603787 systemd[1]: Started Session 9 of User zuul.
Jan 31 04:49:44 np0005603787 python3.9[31459]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 04:49:46 np0005603787 python3.9[31633]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:49:46 np0005603787 python3.9[31785]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:49:47 np0005603787 python3.9[31938]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 04:49:48 np0005603787 python3.9[32090]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:49:49 np0005603787 python3.9[32242]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:49:49 np0005603787 python3.9[32367]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852988.8878193-68-41879892305088/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:49:50 np0005603787 python3.9[32519]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:49:51 np0005603787 python3.9[32675]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 04:49:52 np0005603787 python3.9[32827]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 04:49:52 np0005603787 python3.9[32977]: ansible-ansible.builtin.service_facts Invoked
Jan 31 04:49:55 np0005603787 python3.9[33230]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:49:56 np0005603787 python3.9[33380]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:49:57 np0005603787 python3.9[33534]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:49:58 np0005603787 python3.9[33692]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 04:49:59 np0005603787 python3.9[33776]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 04:50:48 np0005603787 systemd[1]: Reloading.
Jan 31 04:50:48 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:50:48 np0005603787 systemd[1]: Starting dnf makecache...
Jan 31 04:50:48 np0005603787 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 31 04:50:48 np0005603787 dnf[33985]: Failed determining last makecache time.
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-openstack-barbican-42b4c41831408a8e323 141 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-python-glean-642fffe0203a8ffcc2443db52 138 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-openstack-cinder-1c00d6490d88e436f26ef 144 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-python-stevedore-c4acc5639fd2329372142 147 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-python-cloudkitty-tests-tempest-783703 120 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-diskimage-builder-61b717cc45660834fe9a 141 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-openstack-nova-eaa65f0b85123a4ee343246 176 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-python-designate-tests-tempest-347fdbc 190 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-openstack-glance-1fd12c29b339f30fe823e 166 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 180 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-openstack-manila-d783d10e75495b73866db 176 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-openstack-neutron-95cadbd379667c8520c8 156 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-openstack-octavia-5975097dd4b021385178 153 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-openstack-watcher-c014f81a8647287f6dcc 149 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-python-tcib-78032d201b02cee27e8e644c61 177 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 146 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 systemd[1]: Reloading.
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-openstack-swift-dc98a8463506ac520c469a 179 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-python-tempestconf-8515371b7cceebd4282 158 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 dnf[33985]: delorean-openstack-heat-ui-013accbfd179753bc3f0 151 kB/s | 3.0 kB     00:00
Jan 31 04:50:48 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:50:48 np0005603787 dnf[33985]: CentOS Stream 9 - BaseOS                         54 kB/s | 6.1 kB     00:00
Jan 31 04:50:48 np0005603787 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 31 04:50:49 np0005603787 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 31 04:50:49 np0005603787 systemd[1]: Reloading.
Jan 31 04:50:49 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:50:49 np0005603787 dnf[33985]: CentOS Stream 9 - AppStream                      61 kB/s | 6.5 kB     00:00
Jan 31 04:50:49 np0005603787 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 31 04:50:49 np0005603787 dnf[33985]: CentOS Stream 9 - CRB                            58 kB/s | 6.0 kB     00:00
Jan 31 04:50:49 np0005603787 dnf[33985]: CentOS Stream 9 - Extras packages                72 kB/s | 7.3 kB     00:00
Jan 31 04:50:49 np0005603787 dnf[33985]: dlrn-antelope-testing                           118 kB/s | 3.0 kB     00:00
Jan 31 04:50:49 np0005603787 dnf[33985]: dlrn-antelope-build-deps                        148 kB/s | 3.0 kB     00:00
Jan 31 04:50:49 np0005603787 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 31 04:50:49 np0005603787 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 31 04:50:49 np0005603787 dnf[33985]: centos9-rabbitmq                                114 kB/s | 3.0 kB     00:00
Jan 31 04:50:49 np0005603787 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 31 04:50:49 np0005603787 dnf[33985]: centos9-storage                                 129 kB/s | 3.0 kB     00:00
Jan 31 04:50:49 np0005603787 dnf[33985]: centos9-opstools                                128 kB/s | 3.0 kB     00:00
Jan 31 04:50:49 np0005603787 dnf[33985]: NFV SIG OpenvSwitch                             142 kB/s | 3.0 kB     00:00
Jan 31 04:50:49 np0005603787 dnf[33985]: repo-setup-centos-appstream                     212 kB/s | 4.4 kB     00:00
Jan 31 04:50:49 np0005603787 dnf[33985]: repo-setup-centos-baseos                        165 kB/s | 3.9 kB     00:00
Jan 31 04:50:49 np0005603787 dnf[33985]: repo-setup-centos-highavailability              159 kB/s | 3.9 kB     00:00
Jan 31 04:50:49 np0005603787 dnf[33985]: repo-setup-centos-powertools                    178 kB/s | 4.3 kB     00:00
Jan 31 04:50:50 np0005603787 dnf[33985]: Extra Packages for Enterprise Linux 9 - x86_64  104 kB/s |  31 kB     00:00
Jan 31 04:50:50 np0005603787 dnf[33985]: Metadata cache created.
Jan 31 04:50:50 np0005603787 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 31 04:50:50 np0005603787 systemd[1]: Finished dnf makecache.
Jan 31 04:50:50 np0005603787 systemd[1]: dnf-makecache.service: Consumed 1.878s CPU time.
Jan 31 04:51:58 np0005603787 kernel: SELinux:  Converting 2727 SID table entries...
Jan 31 04:51:58 np0005603787 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 04:51:58 np0005603787 kernel: SELinux:  policy capability open_perms=1
Jan 31 04:51:58 np0005603787 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 04:51:58 np0005603787 kernel: SELinux:  policy capability always_check_network=0
Jan 31 04:51:58 np0005603787 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 04:51:58 np0005603787 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 04:51:58 np0005603787 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 04:51:58 np0005603787 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 31 04:51:58 np0005603787 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 04:51:58 np0005603787 systemd[1]: Starting man-db-cache-update.service...
Jan 31 04:51:58 np0005603787 systemd[1]: Reloading.
Jan 31 04:51:58 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:51:58 np0005603787 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 04:51:59 np0005603787 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 04:51:59 np0005603787 systemd[1]: Finished man-db-cache-update.service.
Jan 31 04:51:59 np0005603787 systemd[1]: run-r339f90a85a8546c9a3e398b9ecf984a4.service: Deactivated successfully.
Jan 31 04:52:00 np0005603787 python3.9[35331]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:52:02 np0005603787 python3.9[35612]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 04:52:03 np0005603787 python3.9[35764]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 04:52:05 np0005603787 python3.9[35917]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:52:06 np0005603787 python3.9[36069]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 04:52:07 np0005603787 python3.9[36222]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 04:52:08 np0005603787 python3.9[36374]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:52:12 np0005603787 python3.9[36497]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853127.8823934-231-190304511603855/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab7d5e470c6e190b74372f300d98064504b36836 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:52:13 np0005603787 python3.9[36650]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 04:52:14 np0005603787 python3.9[36802]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:52:14 np0005603787 python3.9[36955]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:52:15 np0005603787 python3.9[37107]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 04:52:15 np0005603787 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]

Jan 31 04:52:16 np0005603787 python3.9[37261]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 04:52:17 np0005603787 python3.9[37419]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 04:52:19 np0005603787 python3.9[37579]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 04:52:19 np0005603787 python3.9[37732]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 04:52:20 np0005603787 python3.9[37890]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 04:52:21 np0005603787 python3.9[38042]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 04:52:24 np0005603787 python3.9[38195]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 04:52:24 np0005603787 python3.9[38347]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:52:25 np0005603787 python3.9[38470]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853144.3321357-350-244295012042786/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 04:52:26 np0005603787 python3.9[38622]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 04:52:26 np0005603787 systemd[1]: Starting Load Kernel Modules...
Jan 31 04:52:26 np0005603787 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 31 04:52:26 np0005603787 kernel: Bridge firewalling registered
Jan 31 04:52:26 np0005603787 systemd-modules-load[38626]: Inserted module 'br_netfilter'
Jan 31 04:52:26 np0005603787 systemd[1]: Finished Load Kernel Modules.
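The restart of systemd-modules-load.service above picks up the freshly written /etc/modules-load.d/99-edpm.conf; its exact contents are not logged, but the "Inserted module 'br_netfilter'" message implies it contains at least something like:

    # /etc/modules-load.d/99-edpm.conf (content inferred from the insert message above)
    br_netfilter

br_netfilter is what makes bridged traffic visible to iptables/nftables, matching the kernel hint two lines earlier.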
Jan 31 04:52:27 np0005603787 python3.9[38781]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:52:27 np0005603787 python3.9[38904]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853146.671602-373-280604467804936/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 04:52:28 np0005603787 python3.9[39056]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 04:52:31 np0005603787 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 31 04:52:31 np0005603787 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 31 04:52:31 np0005603787 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 04:52:31 np0005603787 systemd[1]: Starting man-db-cache-update.service...
Jan 31 04:52:31 np0005603787 systemd[1]: Reloading.
Jan 31 04:52:31 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:52:31 np0005603787 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 04:52:33 np0005603787 python3.9[40770]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 04:52:33 np0005603787 python3.9[41882]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 04:52:34 np0005603787 python3.9[42784]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 04:52:34 np0005603787 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 04:52:34 np0005603787 systemd[1]: Finished man-db-cache-update.service.
Jan 31 04:52:34 np0005603787 systemd[1]: man-db-cache-update.service: Consumed 3.683s CPU time.
Jan 31 04:52:34 np0005603787 systemd[1]: run-rbd39a6a2df534ad684450c85ff388596.service: Deactivated successfully.
Jan 31 04:52:34 np0005603787 python3.9[43261]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:52:35 np0005603787 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 04:52:35 np0005603787 systemd[1]: Starting Authorization Manager...
Jan 31 04:52:35 np0005603787 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 04:52:35 np0005603787 polkitd[43478]: Started polkitd version 0.117
Jan 31 04:52:35 np0005603787 systemd[1]: Started Authorization Manager.
Jan 31 04:52:36 np0005603787 python3.9[43648]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 04:52:36 np0005603787 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 04:52:36 np0005603787 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 04:52:36 np0005603787 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 04:52:36 np0005603787 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 04:52:36 np0005603787 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 04:52:37 np0005603787 python3.9[43810]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 04:52:39 np0005603787 python3.9[43962]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 04:52:39 np0005603787 systemd[1]: Reloading.
Jan 31 04:52:39 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:52:39 np0005603787 python3.9[44151]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 04:52:39 np0005603787 systemd[1]: Reloading.
Jan 31 04:52:40 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:52:40 np0005603787 python3.9[44340]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:52:41 np0005603787 python3.9[44493]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:52:41 np0005603787 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
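Taken together, the swap tasks (dd at 04:52:03, the 0600 root-owned file at 04:52:05, the ansible.posix.mount entry at 04:52:06, then mkswap/swapon just above) amount to roughly the following manual sequence; the fstab line is an assumed rendering of the mount module parameters (src=/swap, fstype=swap, opts=sw):

    dd if=/dev/zero of=/swap count=1024 bs=1M      # 1 GiB swap file
    chmod 0600 /swap
    echo '/swap none swap sw 0 0' >> /etc/fstab    # assumed fstab entry written by the mount task
    mkswap /swap
    swapon /swap

The kernel message above ("Adding 1048572k swap") confirms the 1 GiB file is active.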
Jan 31 04:52:42 np0005603787 python3.9[44646]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:52:44 np0005603787 python3.9[44808]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
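Writing 2 to /sys/kernel/mm/ksm/run stops the KSM daemon and unmerges any already-merged pages, consistent with ksm.service and ksmtuned.service being stopped and disabled a few lines earlier. A quick check (sketch):

    cat /sys/kernel/mm/ksm/run            # 0 = stopped, 1 = running, 2 = stop and unmerge
    cat /sys/kernel/mm/ksm/pages_shared   # should fall back to 0 once unmerging completes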
Jan 31 04:52:44 np0005603787 python3.9[44961]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 04:52:44 np0005603787 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 04:52:44 np0005603787 systemd[1]: Stopped Apply Kernel Variables.
Jan 31 04:52:44 np0005603787 systemd[1]: Stopping Apply Kernel Variables...
Jan 31 04:52:44 np0005603787 systemd[1]: Starting Apply Kernel Variables...
Jan 31 04:52:44 np0005603787 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 04:52:44 np0005603787 systemd[1]: Finished Apply Kernel Variables.
Jan 31 04:52:45 np0005603787 systemd-logind[786]: Session 9 logged out. Waiting for processes to exit.
Jan 31 04:52:45 np0005603787 systemd[1]: session-9.scope: Deactivated successfully.
Jan 31 04:52:45 np0005603787 systemd[1]: session-9.scope: Consumed 2min 10.130s CPU time.
Jan 31 04:52:45 np0005603787 systemd-logind[786]: Removed session 9.
Jan 31 04:52:50 np0005603787 irqbalance[784]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 31 04:52:50 np0005603787 irqbalance[784]: IRQ 26 affinity is now unmanaged
Jan 31 04:52:52 np0005603787 systemd-logind[786]: New session 10 of user zuul.
Jan 31 04:52:52 np0005603787 systemd[1]: Started Session 10 of User zuul.
Jan 31 04:52:53 np0005603787 python3.9[45144]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:52:54 np0005603787 python3.9[45300]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 04:52:55 np0005603787 python3.9[45453]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 04:52:56 np0005603787 python3.9[45611]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 04:52:57 np0005603787 python3.9[45771]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 04:52:58 np0005603787 python3.9[45855]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 04:53:01 np0005603787 python3.9[46019]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 04:53:13 np0005603787 kernel: SELinux:  Converting 2739 SID table entries...
Jan 31 04:53:13 np0005603787 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 04:53:13 np0005603787 kernel: SELinux:  policy capability open_perms=1
Jan 31 04:53:13 np0005603787 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 04:53:13 np0005603787 kernel: SELinux:  policy capability always_check_network=0
Jan 31 04:53:13 np0005603787 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 04:53:13 np0005603787 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 04:53:13 np0005603787 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 04:53:14 np0005603787 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 31 04:53:14 np0005603787 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 31 04:53:15 np0005603787 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 04:53:15 np0005603787 systemd[1]: Starting man-db-cache-update.service...
Jan 31 04:53:15 np0005603787 systemd[1]: Reloading.
Jan 31 04:53:15 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:53:15 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:53:15 np0005603787 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 04:53:16 np0005603787 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 04:53:16 np0005603787 systemd[1]: Finished man-db-cache-update.service.
Jan 31 04:53:16 np0005603787 systemd[1]: run-r77995f1ac5484c87af88d7d805959cf9.service: Deactivated successfully.
Jan 31 04:53:17 np0005603787 python3.9[47117]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 04:53:17 np0005603787 systemd[1]: Reloading.
Jan 31 04:53:17 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:53:17 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:53:17 np0005603787 systemd[1]: Starting Open vSwitch Database Unit...
Jan 31 04:53:17 np0005603787 chown[47159]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 31 04:53:17 np0005603787 ovs-ctl[47164]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 31 04:53:17 np0005603787 ovs-ctl[47164]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 31 04:53:17 np0005603787 ovs-ctl[47164]: Starting ovsdb-server [  OK  ]
Jan 31 04:53:17 np0005603787 ovs-vsctl[47213]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 31 04:53:17 np0005603787 ovs-vsctl[47233]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"ef41023c-ae05-4c9a-b1cb-d6bd86d05fb4\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 31 04:53:17 np0005603787 ovs-ctl[47164]: Configuring Open vSwitch system IDs [  OK  ]
Jan 31 04:53:17 np0005603787 ovs-ctl[47164]: Enabling remote OVSDB managers [  OK  ]
Jan 31 04:53:17 np0005603787 systemd[1]: Started Open vSwitch Database Unit.
Jan 31 04:53:17 np0005603787 ovs-vsctl[47239]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 31 04:53:17 np0005603787 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 31 04:53:17 np0005603787 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 31 04:53:17 np0005603787 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 31 04:53:17 np0005603787 kernel: openvswitch: Open vSwitch switching datapath
Jan 31 04:53:17 np0005603787 ovs-ctl[47284]: Inserting openvswitch module [  OK  ]
Jan 31 04:53:17 np0005603787 ovs-ctl[47253]: Starting ovs-vswitchd [  OK  ]
Jan 31 04:53:17 np0005603787 ovs-vsctl[47303]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 31 04:53:17 np0005603787 ovs-ctl[47253]: Enabling remote OVSDB managers [  OK  ]
Jan 31 04:53:17 np0005603787 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 31 04:53:17 np0005603787 systemd[1]: Starting Open vSwitch...
Jan 31 04:53:17 np0005603787 systemd[1]: Finished Open vSwitch.
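At this point openvswitch.service is up with a freshly created, still empty database; the ovs-vsctl calls logged above set db-version, ovs-version and the external-ids (system-id, rundir, hostname=compute-0). A quick way to confirm the state (sketch, exact output will vary):

    ovs-vsctl show                                      # empty bridge topology plus ovs_version
    ovs-vsctl get Open_vSwitch . external-ids:hostname  # should return "compute-0"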
Jan 31 04:53:18 np0005603787 python3.9[47454]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:53:19 np0005603787 python3.9[47606]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 04:53:20 np0005603787 kernel: SELinux:  Converting 2753 SID table entries...
Jan 31 04:53:20 np0005603787 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 04:53:20 np0005603787 kernel: SELinux:  policy capability open_perms=1
Jan 31 04:53:20 np0005603787 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 04:53:20 np0005603787 kernel: SELinux:  policy capability always_check_network=0
Jan 31 04:53:20 np0005603787 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 04:53:20 np0005603787 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 04:53:20 np0005603787 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 04:53:21 np0005603787 python3.9[47761]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:53:22 np0005603787 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 31 04:53:22 np0005603787 python3.9[47919]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 04:53:24 np0005603787 python3.9[48072]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:53:26 np0005603787 python3.9[48359]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 04:53:26 np0005603787 python3.9[48509]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 04:53:27 np0005603787 python3.9[48663]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 04:53:29 np0005603787 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 04:53:29 np0005603787 systemd[1]: Starting man-db-cache-update.service...
Jan 31 04:53:29 np0005603787 systemd[1]: Reloading.
Jan 31 04:53:29 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:53:29 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:53:29 np0005603787 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 04:53:29 np0005603787 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 04:53:29 np0005603787 systemd[1]: Finished man-db-cache-update.service.
Jan 31 04:53:29 np0005603787 systemd[1]: run-re2c09d5cb88a4d9dbc9d3b8cab42e7ec.service: Deactivated successfully.
Jan 31 04:53:30 np0005603787 python3.9[48980]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 04:53:30 np0005603787 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 04:53:30 np0005603787 systemd[1]: Stopped Network Manager Wait Online.
Jan 31 04:53:30 np0005603787 systemd[1]: Stopping Network Manager Wait Online...
Jan 31 04:53:30 np0005603787 systemd[1]: Stopping Network Manager...
Jan 31 04:53:30 np0005603787 NetworkManager[7189]: <info>  [1769853210.5575] caught SIGTERM, shutting down normally.
Jan 31 04:53:30 np0005603787 NetworkManager[7189]: <info>  [1769853210.5590] dhcp4 (eth0): canceled DHCP transaction
Jan 31 04:53:30 np0005603787 NetworkManager[7189]: <info>  [1769853210.5590] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 04:53:30 np0005603787 NetworkManager[7189]: <info>  [1769853210.5590] dhcp4 (eth0): state changed no lease
Jan 31 04:53:30 np0005603787 NetworkManager[7189]: <info>  [1769853210.5592] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 04:53:30 np0005603787 NetworkManager[7189]: <info>  [1769853210.5652] exiting (success)
Jan 31 04:53:30 np0005603787 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 04:53:30 np0005603787 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 04:53:30 np0005603787 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 04:53:30 np0005603787 systemd[1]: Stopped Network Manager.
Jan 31 04:53:30 np0005603787 systemd[1]: NetworkManager.service: Consumed 17.272s CPU time, 4.1M memory peak, read 0B from disk, written 28.0K to disk.
Jan 31 04:53:30 np0005603787 systemd[1]: Starting Network Manager...
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.6291] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:3890a77f-f0f1-4a23-84f1-1930fb6c021a)
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.6294] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.6339] manager[0x562d27b97000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 04:53:30 np0005603787 systemd[1]: Starting Hostname Service...
Jan 31 04:53:30 np0005603787 systemd[1]: Started Hostname Service.
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7031] hostname: hostname: using hostnamed
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7032] hostname: static hostname changed from (none) to "compute-0"
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7038] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7043] manager[0x562d27b97000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7043] manager[0x562d27b97000]: rfkill: WWAN hardware radio set enabled
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7065] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7072] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7073] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7073] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7074] manager: Networking is enabled by state file
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7076] settings: Loaded settings plugin: keyfile (internal)
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7079] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7099] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7106] dhcp: init: Using DHCP client 'internal'
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7108] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7112] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7116] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7121] device (lo): Activation: starting connection 'lo' (8dcb1e44-759d-480c-a0e9-6890091fb566)
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7126] device (eth0): carrier: link connected
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7128] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7131] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7131] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7135] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7139] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7142] device (eth1): carrier: link connected
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7145] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7149] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (d3b12458-23c1-57b0-aa6a-91480d18c487) (indicated)
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7149] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7153] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7158] device (eth1): Activation: starting connection 'ci-private-network' (d3b12458-23c1-57b0-aa6a-91480d18c487)
Jan 31 04:53:30 np0005603787 systemd[1]: Started Network Manager.
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7181] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7190] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7193] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7196] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7198] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7201] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7204] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7207] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7210] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7219] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7222] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7246] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7258] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7267] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7269] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7273] device (lo): Activation: successful, device activated.
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7279] dhcp4 (eth0): state changed new lease, address=38.129.56.90
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7285] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 04:53:30 np0005603787 systemd[1]: Starting Network Manager Wait Online...
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7349] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7356] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7361] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7365] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7367] device (eth1): Activation: successful, device activated.
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7375] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7376] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7379] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7380] device (eth0): Activation: successful, device activated.
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7384] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 04:53:30 np0005603787 NetworkManager[48992]: <info>  [1769853210.7387] manager: startup complete
Jan 31 04:53:30 np0005603787 systemd[1]: Finished Network Manager Wait Online.
Jan 31 04:53:31 np0005603787 python3.9[49206]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 04:53:36 np0005603787 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 04:53:36 np0005603787 systemd[1]: Starting man-db-cache-update.service...
Jan 31 04:53:36 np0005603787 systemd[1]: Reloading.
Jan 31 04:53:36 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:53:36 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:53:36 np0005603787 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 04:53:39 np0005603787 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 04:53:39 np0005603787 systemd[1]: Finished man-db-cache-update.service.
Jan 31 04:53:39 np0005603787 systemd[1]: run-r8797e38534ae416680ce784d4aaa999f.service: Deactivated successfully.
Jan 31 04:53:39 np0005603787 python3.9[49665]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 04:53:40 np0005603787 python3.9[49817]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:53:40 np0005603787 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 04:53:41 np0005603787 python3.9[49971]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:53:42 np0005603787 python3.9[50123]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:53:42 np0005603787 python3.9[50275]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:53:43 np0005603787 python3.9[50427]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:53:43 np0005603787 python3.9[50579]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:53:44 np0005603787 python3.9[50702]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853223.389916-224-99039442420999/.source _original_basename=.rfof9ss2 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:53:45 np0005603787 python3.9[50854]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:53:45 np0005603787 python3.9[51006]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 31 04:53:46 np0005603787 python3.9[51158]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:53:48 np0005603787 python3.9[51585]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 31 04:53:49 np0005603787 ansible-async_wrapper.py[51760]: Invoked with j156426978855 300 /home/zuul/.ansible/tmp/ansible-tmp-1769853228.4206827-290-191111923455231/AnsiballZ_edpm_os_net_config.py _
Jan 31 04:53:49 np0005603787 ansible-async_wrapper.py[51763]: Starting module and watcher
Jan 31 04:53:49 np0005603787 ansible-async_wrapper.py[51763]: Start watching 51764 (300)
Jan 31 04:53:49 np0005603787 ansible-async_wrapper.py[51764]: Start module (51764)
Jan 31 04:53:49 np0005603787 ansible-async_wrapper.py[51760]: Return async_wrapper task started.
Jan 31 04:53:49 np0005603787 python3.9[51765]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 31 04:53:50 np0005603787 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 31 04:53:50 np0005603787 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 31 04:53:50 np0005603787 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 31 04:53:50 np0005603787 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 31 04:53:50 np0005603787 kernel: cfg80211: failed to load regulatory.db
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1297] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1312] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1810] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1811] audit: op="connection-add" uuid="8c3b6ce2-1f6e-4652-880a-4aaf65d0cb7c" name="br-ex-br" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1829] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1831] audit: op="connection-add" uuid="88c9f100-02ba-465f-a1e6-6a0a978a3329" name="br-ex-port" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1842] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1843] audit: op="connection-add" uuid="67f131c3-9a28-4dbe-8d1e-fb66a3e2889d" name="eth1-port" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1854] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1855] audit: op="connection-add" uuid="ced32c32-2f27-471b-9035-0ca7e54baba5" name="vlan20-port" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1866] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1867] audit: op="connection-add" uuid="3fdeccd3-d06e-4b2a-9247-cf419db3a8c3" name="vlan21-port" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1878] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1879] audit: op="connection-add" uuid="21348b00-0acc-4d77-8c75-37994f3999de" name="vlan22-port" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1887] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1888] audit: op="connection-add" uuid="84185c3a-bd13-47fa-b73a-503ad12e6dd7" name="vlan23-port" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1905] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1919] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1920] audit: op="connection-add" uuid="97310a89-beae-43ab-ad2b-5d7de768c48a" name="br-ex-if" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1957] audit: op="connection-update" uuid="d3b12458-23c1-57b0-aa6a-91480d18c487" name="ci-private-network" args="ovs-interface.type,ovs-external-ids.data,ipv6.addr-gen-mode,ipv6.dns,ipv6.routing-rules,ipv6.routes,ipv6.addresses,ipv6.method,connection.port-type,connection.slave-type,connection.master,connection.timestamp,connection.controller,ipv4.dns,ipv4.routing-rules,ipv4.never-default,ipv4.routes,ipv4.addresses,ipv4.method" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1969] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1971] audit: op="connection-add" uuid="ee2f5ee4-5d85-4bcb-a47a-97f7b77c515b" name="vlan20-if" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1986] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.1987] audit: op="connection-add" uuid="9c62e10f-a6e7-4a1d-b27b-84fbbf2a2f8c" name="vlan21-if" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2001] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2003] audit: op="connection-add" uuid="8a28f0f9-9c92-4d27-8d8c-3d4d0300bf3d" name="vlan22-if" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2017] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2019] audit: op="connection-add" uuid="bac16c3c-26df-4201-8c2c-9ffe9564ef6a" name="vlan23-if" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2028] audit: op="connection-delete" uuid="d613f60b-0bbf-30ef-976f-d1c33beaaf01" name="Wired connection 1" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2037] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <warn>  [1769853231.2040] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2047] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2051] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (8c3b6ce2-1f6e-4652-880a-4aaf65d0cb7c)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2051] audit: op="connection-activate" uuid="8c3b6ce2-1f6e-4652-880a-4aaf65d0cb7c" name="br-ex-br" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2053] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <warn>  [1769853231.2054] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2058] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2061] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (88c9f100-02ba-465f-a1e6-6a0a978a3329)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2063] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <warn>  [1769853231.2064] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2067] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2070] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (67f131c3-9a28-4dbe-8d1e-fb66a3e2889d)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2072] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <warn>  [1769853231.2073] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2077] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2080] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (ced32c32-2f27-471b-9035-0ca7e54baba5)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2082] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <warn>  [1769853231.2083] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2087] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2091] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (3fdeccd3-d06e-4b2a-9247-cf419db3a8c3)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2092] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <warn>  [1769853231.2093] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2098] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2102] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (21348b00-0acc-4d77-8c75-37994f3999de)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2103] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <warn>  [1769853231.2104] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2108] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2112] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (84185c3a-bd13-47fa-b73a-503ad12e6dd7)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2113] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2115] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2117] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2123] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <warn>  [1769853231.2124] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2126] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2130] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (97310a89-beae-43ab-ad2b-5d7de768c48a)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2130] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2132] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2133] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2134] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2135] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2142] device (eth1): disconnecting for new activation request.
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2142] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2144] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2145] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2146] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2148] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <warn>  [1769853231.2149] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2151] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2153] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (ee2f5ee4-5d85-4bcb-a47a-97f7b77c515b)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2154] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2156] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2157] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2158] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2160] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <warn>  [1769853231.2160] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2162] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2164] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (9c62e10f-a6e7-4a1d-b27b-84fbbf2a2f8c)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2165] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2167] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2168] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2168] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2170] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <warn>  [1769853231.2171] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2174] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2177] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (8a28f0f9-9c92-4d27-8d8c-3d4d0300bf3d)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2178] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2181] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2182] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2184] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2186] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <warn>  [1769853231.2187] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2190] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2193] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (bac16c3c-26df-4201-8c2c-9ffe9564ef6a)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2194] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2196] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2197] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2198] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2199] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2208] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2209] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2212] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2213] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2219] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2222] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2226] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2229] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2230] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 kernel: ovs-system: entered promiscuous mode
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2245] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2249] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2252] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2254] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2259] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 kernel: Timeout policy base is empty
Jan 31 04:53:51 np0005603787 systemd-udevd[51770]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2262] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2264] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2265] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2269] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2272] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2274] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2275] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2280] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2284] dhcp4 (eth0): canceled DHCP transaction
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2284] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2284] dhcp4 (eth0): state changed no lease
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2286] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2295] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2298] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51766 uid=0 result="fail" reason="Device is not activated"
Jan 31 04:53:51 np0005603787 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2375] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2379] dhcp4 (eth0): state changed new lease, address=38.129.56.90
Jan 31 04:53:51 np0005603787 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2446] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2459] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2466] device (eth1): disconnecting for new activation request.
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2466] audit: op="connection-activate" uuid="d3b12458-23c1-57b0-aa6a-91480d18c487" name="ci-private-network" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2466] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 31 04:53:51 np0005603787 kernel: br-ex: entered promiscuous mode
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2631] device (eth1): Activation: starting connection 'ci-private-network' (d3b12458-23c1-57b0-aa6a-91480d18c487)
Jan 31 04:53:51 np0005603787 kernel: vlan22: entered promiscuous mode
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2638] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2640] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2641] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2642] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2644] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2645] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2646] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2651] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 31 04:53:51 np0005603787 systemd-udevd[51771]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2663] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2667] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2676] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2680] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2687] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2694] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2698] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2705] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2710] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2716] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2721] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2725] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2729] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 kernel: vlan20: entered promiscuous mode
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2734] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2739] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2745] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2762] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51766 uid=0 result="success"
Jan 31 04:53:51 np0005603787 kernel: vlan23: entered promiscuous mode
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2771] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2783] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2798] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2801] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 31 04:53:51 np0005603787 systemd-udevd[51875]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2806] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 systemd-udevd[51772]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:53:51 np0005603787 kernel: vlan21: entered promiscuous mode
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2840] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2842] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2849] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2861] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2866] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2871] device (eth1): Activation: successful, device activated.
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2882] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2883] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2886] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2893] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2897] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2902] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2906] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2912] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2913] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2915] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2992] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.2994] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.3011] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.3017] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.3032] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.3033] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.3037] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.3044] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.3045] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 04:53:51 np0005603787 NetworkManager[48992]: <info>  [1769853231.3048] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 04:53:52 np0005603787 NetworkManager[48992]: <info>  [1769853232.4696] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51766 uid=0 result="success"
Jan 31 04:53:52 np0005603787 NetworkManager[48992]: <info>  [1769853232.6206] checkpoint[0x562d27b6d950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 31 04:53:52 np0005603787 NetworkManager[48992]: <info>  [1769853232.6208] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51766 uid=0 result="success"
Jan 31 04:53:52 np0005603787 NetworkManager[48992]: <info>  [1769853232.9345] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51766 uid=0 result="success"
Jan 31 04:53:52 np0005603787 NetworkManager[48992]: <info>  [1769853232.9357] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51766 uid=0 result="success"
Jan 31 04:53:53 np0005603787 python3.9[52125]: ansible-ansible.legacy.async_status Invoked with jid=j156426978855.51760 mode=status _async_dir=/root/.ansible_async
Jan 31 04:53:53 np0005603787 NetworkManager[48992]: <info>  [1769853233.1632] audit: op="networking-control" arg="global-dns-configuration" pid=51766 uid=0 result="success"
Jan 31 04:53:53 np0005603787 NetworkManager[48992]: <info>  [1769853233.1664] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 31 04:53:53 np0005603787 NetworkManager[48992]: <info>  [1769853233.1706] audit: op="networking-control" arg="global-dns-configuration" pid=51766 uid=0 result="success"
Jan 31 04:53:53 np0005603787 NetworkManager[48992]: <info>  [1769853233.1732] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51766 uid=0 result="success"
Jan 31 04:53:53 np0005603787 NetworkManager[48992]: <info>  [1769853233.3243] checkpoint[0x562d27b6da20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 31 04:53:53 np0005603787 NetworkManager[48992]: <info>  [1769853233.3247] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51766 uid=0 result="success"
Jan 31 04:53:53 np0005603787 ansible-async_wrapper.py[51764]: Module complete (51764)
Jan 31 04:53:54 np0005603787 ansible-async_wrapper.py[51763]: Done in kid B.
Jan 31 04:53:56 np0005603787 python3.9[52230]: ansible-ansible.legacy.async_status Invoked with jid=j156426978855.51760 mode=status _async_dir=/root/.ansible_async
Jan 31 04:53:56 np0005603787 python3.9[52330]: ansible-ansible.legacy.async_status Invoked with jid=j156426978855.51760 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 04:53:57 np0005603787 python3.9[52482]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:53:58 np0005603787 python3.9[52605]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853237.1104476-317-232551730645464/.source.returncode _original_basename=.busgwqh0 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:53:58 np0005603787 python3.9[52757]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:53:59 np0005603787 python3.9[52880]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853238.279262-333-108202680371924/.source.cfg _original_basename=.jd2yd5xp follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:53:59 np0005603787 python3.9[53032]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 04:53:59 np0005603787 systemd[1]: Reloading Network Manager...
Jan 31 04:53:59 np0005603787 NetworkManager[48992]: <info>  [1769853239.8650] audit: op="reload" arg="0" pid=53037 uid=0 result="success"
Jan 31 04:53:59 np0005603787 NetworkManager[48992]: <info>  [1769853239.8656] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 31 04:53:59 np0005603787 systemd[1]: Reloaded Network Manager.
Jan 31 04:54:00 np0005603787 systemd[1]: session-10.scope: Deactivated successfully.
Jan 31 04:54:00 np0005603787 systemd[1]: session-10.scope: Consumed 45.140s CPU time.
Jan 31 04:54:00 np0005603787 systemd-logind[786]: Session 10 logged out. Waiting for processes to exit.
Jan 31 04:54:00 np0005603787 systemd-logind[786]: Removed session 10.
Jan 31 04:54:00 np0005603787 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 04:54:06 np0005603787 systemd-logind[786]: New session 11 of user zuul.
Jan 31 04:54:06 np0005603787 systemd[1]: Started Session 11 of User zuul.
Jan 31 04:54:07 np0005603787 python3.9[53223]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:54:08 np0005603787 python3.9[53377]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 04:54:09 np0005603787 python3.9[53570]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:54:09 np0005603787 systemd-logind[786]: Session 11 logged out. Waiting for processes to exit.
Jan 31 04:54:09 np0005603787 systemd[1]: session-11.scope: Deactivated successfully.
Jan 31 04:54:09 np0005603787 systemd[1]: session-11.scope: Consumed 1.876s CPU time.
Jan 31 04:54:09 np0005603787 systemd-logind[786]: Removed session 11.
Jan 31 04:54:09 np0005603787 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 04:54:14 np0005603787 systemd-logind[786]: New session 12 of user zuul.
Jan 31 04:54:14 np0005603787 systemd[1]: Started Session 12 of User zuul.
Jan 31 04:54:15 np0005603787 python3.9[53753]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:54:16 np0005603787 python3.9[53908]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:54:17 np0005603787 python3.9[54064]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 04:54:18 np0005603787 python3.9[54148]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 04:54:20 np0005603787 python3.9[54302]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 04:54:20 np0005603787 python3.9[54497]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:54:21 np0005603787 python3.9[54649]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:54:21 np0005603787 podman[54650]: 2026-01-31 09:54:21.694057597 +0000 UTC m=+0.053256236 system refresh
Jan 31 04:54:22 np0005603787 python3.9[54812]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:54:22 np0005603787 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 04:54:23 np0005603787 python3.9[54935]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853261.89095-74-200402932108101/.source.json follow=False _original_basename=podman_network_config.j2 checksum=504cf97170a19be1a6d5fc9891bc3c80c6f40d57 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:54:23 np0005603787 python3.9[55087]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:54:24 np0005603787 python3.9[55210]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853263.2042603-89-22185194753348/.source.conf follow=False _original_basename=registries.conf.j2 checksum=3be7a60f934c092075c2da93762d4d72f2e4c224 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 04:54:24 np0005603787 python3.9[55362]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 04:54:25 np0005603787 python3.9[55514]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 04:54:25 np0005603787 python3.9[55666]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 04:54:26 np0005603787 python3.9[55818]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 04:54:27 np0005603787 python3.9[55970]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 04:54:29 np0005603787 python3.9[56123]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:54:29 np0005603787 python3.9[56277]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 04:54:30 np0005603787 python3.9[56429]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 04:54:30 np0005603787 python3.9[56581]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:54:31 np0005603787 python3.9[56734]: ansible-service_facts Invoked
Jan 31 04:54:31 np0005603787 network[56751]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 04:54:31 np0005603787 network[56752]: 'network-scripts' will be removed from distribution in near future.
Jan 31 04:54:31 np0005603787 network[56753]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 04:54:35 np0005603787 python3.9[57205]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 04:54:37 np0005603787 python3.9[57358]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 04:54:39 np0005603787 python3.9[57510]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:54:39 np0005603787 python3.9[57635]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853278.6349835-233-169123513890203/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:54:40 np0005603787 python3.9[57789]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:54:40 np0005603787 python3.9[57914]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853279.8930745-248-170026933933600/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:54:41 np0005603787 python3.9[58068]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:54:43 np0005603787 python3.9[58222]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 04:54:44 np0005603787 python3.9[58306]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 04:54:45 np0005603787 python3.9[58460]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 04:54:45 np0005603787 python3.9[58544]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 04:54:45 np0005603787 chronyd[800]: chronyd exiting
Jan 31 04:54:45 np0005603787 systemd[1]: Stopping NTP client/server...
Jan 31 04:54:45 np0005603787 systemd[1]: chronyd.service: Deactivated successfully.
Jan 31 04:54:45 np0005603787 systemd[1]: Stopped NTP client/server.
Jan 31 04:54:45 np0005603787 systemd[1]: Starting NTP client/server...
Jan 31 04:54:45 np0005603787 chronyd[58552]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 04:54:45 np0005603787 chronyd[58552]: Frequency -26.043 +/- 0.194 ppm read from /var/lib/chrony/drift
Jan 31 04:54:45 np0005603787 chronyd[58552]: Loaded seccomp filter (level 2)
Jan 31 04:54:45 np0005603787 systemd[1]: Started NTP client/server.
Jan 31 04:54:46 np0005603787 systemd[1]: session-12.scope: Deactivated successfully.
Jan 31 04:54:46 np0005603787 systemd[1]: session-12.scope: Consumed 22.248s CPU time.
Jan 31 04:54:46 np0005603787 systemd-logind[786]: Session 12 logged out. Waiting for processes to exit.
Jan 31 04:54:46 np0005603787 systemd-logind[786]: Removed session 12.
Jan 31 04:54:52 np0005603787 systemd-logind[786]: New session 13 of user zuul.
Jan 31 04:54:52 np0005603787 systemd[1]: Started Session 13 of User zuul.
Jan 31 04:54:52 np0005603787 python3.9[58733]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:54:53 np0005603787 python3.9[58885]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:54:54 np0005603787 python3.9[59008]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853292.953797-29-24179512184568/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:54:54 np0005603787 systemd-logind[786]: Session 13 logged out. Waiting for processes to exit.
Jan 31 04:54:54 np0005603787 systemd[1]: session-13.scope: Deactivated successfully.
Jan 31 04:54:54 np0005603787 systemd[1]: session-13.scope: Consumed 1.420s CPU time.
Jan 31 04:54:54 np0005603787 systemd-logind[786]: Removed session 13.
Jan 31 04:55:00 np0005603787 systemd-logind[786]: New session 14 of user zuul.
Jan 31 04:55:00 np0005603787 systemd[1]: Started Session 14 of User zuul.
Jan 31 04:55:01 np0005603787 python3.9[59186]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:55:02 np0005603787 python3.9[59342]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:02 np0005603787 python3.9[59517]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:55:03 np0005603787 python3.9[59640]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769853302.3159761-36-113384102078294/.source.json _original_basename=.pcf8pqmu follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:04 np0005603787 python3.9[59792]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:55:04 np0005603787 python3.9[59915]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853303.8829007-59-258634330969646/.source _original_basename=.i8vqiog9 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:05 np0005603787 python3.9[60067]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 04:55:05 np0005603787 python3.9[60219]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:55:06 np0005603787 python3.9[60342]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853305.544146-83-195714935536990/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 04:55:06 np0005603787 python3.9[60494]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:55:07 np0005603787 python3.9[60617]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853306.5173988-83-269081471039105/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 04:55:07 np0005603787 python3.9[60769]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:08 np0005603787 python3.9[60921]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:55:08 np0005603787 python3.9[61044]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853308.079779-120-84788463779026/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:09 np0005603787 python3.9[61196]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:55:09 np0005603787 python3.9[61319]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853309.0671084-135-218181815179895/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:10 np0005603787 python3.9[61471]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 04:55:10 np0005603787 systemd[1]: Reloading.
Jan 31 04:55:10 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:55:10 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:55:11 np0005603787 systemd[1]: Reloading.
Jan 31 04:55:11 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:55:11 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:55:11 np0005603787 systemd[1]: Starting EDPM Container Shutdown...
Jan 31 04:55:11 np0005603787 systemd[1]: Finished EDPM Container Shutdown.
Jan 31 04:55:11 np0005603787 python3.9[61698]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:55:12 np0005603787 python3.9[61821]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853311.443018-158-226658549931562/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:12 np0005603787 python3.9[61973]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:55:13 np0005603787 python3.9[62096]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853312.4364974-173-242341314240404/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:13 np0005603787 python3.9[62248]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 04:55:13 np0005603787 systemd[1]: Reloading.
Jan 31 04:55:13 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:55:13 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:55:14 np0005603787 systemd[1]: Reloading.
Jan 31 04:55:14 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:55:14 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:55:14 np0005603787 systemd[1]: Starting Create netns directory...
Jan 31 04:55:14 np0005603787 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 04:55:14 np0005603787 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 04:55:14 np0005603787 systemd[1]: Finished Create netns directory.
Jan 31 04:55:15 np0005603787 python3.9[62473]: ansible-ansible.builtin.service_facts Invoked
Jan 31 04:55:15 np0005603787 network[62490]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 04:55:15 np0005603787 network[62491]: 'network-scripts' will be removed from distribution in near future.
Jan 31 04:55:15 np0005603787 network[62492]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 04:55:17 np0005603787 python3.9[62754]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 04:55:17 np0005603787 systemd[1]: Reloading.
Jan 31 04:55:17 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:55:17 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:55:17 np0005603787 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 31 04:55:18 np0005603787 iptables.init[62794]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 31 04:55:18 np0005603787 iptables.init[62794]: iptables: Flushing firewall rules: [  OK  ]
Jan 31 04:55:18 np0005603787 systemd[1]: iptables.service: Deactivated successfully.
Jan 31 04:55:18 np0005603787 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 31 04:55:18 np0005603787 python3.9[62991]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 04:55:19 np0005603787 python3.9[63145]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 04:55:19 np0005603787 systemd[1]: Reloading.
Jan 31 04:55:19 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:55:19 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:55:19 np0005603787 systemd[1]: Starting Netfilter Tables...
Jan 31 04:55:19 np0005603787 systemd[1]: Finished Netfilter Tables.
Jan 31 04:55:20 np0005603787 python3.9[63337]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:55:21 np0005603787 python3.9[63490]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:55:21 np0005603787 python3.9[63615]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853321.049486-242-68489571731856/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:22 np0005603787 python3.9[63768]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 04:55:22 np0005603787 systemd[1]: Reloading OpenSSH server daemon...
Jan 31 04:55:22 np0005603787 systemd[1]: Reloaded OpenSSH server daemon.
Jan 31 04:55:23 np0005603787 python3.9[63924]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:23 np0005603787 python3.9[64076]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:55:24 np0005603787 python3.9[64199]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853323.4689803-273-101290954960237/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:25 np0005603787 python3.9[64351]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 04:55:25 np0005603787 systemd[1]: Starting Time & Date Service...
Jan 31 04:55:25 np0005603787 systemd[1]: Started Time & Date Service.
Jan 31 04:55:26 np0005603787 python3.9[64507]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:26 np0005603787 python3.9[64659]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:55:27 np0005603787 python3.9[64782]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853326.2278943-308-201312757440129/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:27 np0005603787 python3.9[64934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:55:28 np0005603787 python3.9[65057]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853327.333128-323-169646216986597/.source.yaml _original_basename=.z1_7u5kb follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:28 np0005603787 python3.9[65209]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:55:29 np0005603787 python3.9[65332]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853328.445257-338-144916915319042/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:29 np0005603787 python3.9[65484]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:55:30 np0005603787 python3.9[65637]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:55:31 np0005603787 python3[65790]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 04:55:31 np0005603787 python3.9[65942]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:55:32 np0005603787 python3.9[66065]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853331.4467494-377-84283321520641/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:32 np0005603787 python3.9[66217]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:55:33 np0005603787 python3.9[66340]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853332.4655383-392-141146308673345/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:34 np0005603787 python3.9[66492]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:55:34 np0005603787 python3.9[66615]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853333.6148746-407-107150880680052/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:35 np0005603787 python3.9[66767]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:55:35 np0005603787 python3.9[66890]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853334.7480094-422-4738472940419/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:36 np0005603787 python3.9[67042]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 04:55:36 np0005603787 python3.9[67165]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853335.8473432-437-100724619803066/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:37 np0005603787 python3.9[67317]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:38 np0005603787 python3.9[67469]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:55:39 np0005603787 python3.9[67628]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:39 np0005603787 python3.9[67781]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:40 np0005603787 python3.9[67933]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:41 np0005603787 python3.9[68085]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 04:55:41 np0005603787 python3.9[68238]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 04:55:42 np0005603787 systemd[1]: session-14.scope: Deactivated successfully.
Jan 31 04:55:42 np0005603787 systemd[1]: session-14.scope: Consumed 28.858s CPU time.
Jan 31 04:55:42 np0005603787 systemd-logind[786]: Session 14 logged out. Waiting for processes to exit.
Jan 31 04:55:42 np0005603787 systemd-logind[786]: Removed session 14.
Jan 31 04:55:47 np0005603787 systemd-logind[786]: New session 15 of user zuul.
Jan 31 04:55:47 np0005603787 systemd[1]: Started Session 15 of User zuul.
Jan 31 04:55:48 np0005603787 python3.9[68419]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 04:55:48 np0005603787 python3.9[68571]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 04:55:49 np0005603787 python3.9[68723]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:55:50 np0005603787 python3.9[68875]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1vH7MPTZElmImL3pKNK6rcC7PaBiA/gXXchLJiHq8OhWrBXBDICCBaBd3JU+sJLMp0KfAlpfLJeEqGnLXoDdzfGnNa2s41mFsJIm5PFrKJziX/K2IUIaV+27aPCJSbe4yxAwAPuOrG0UKnLVQXeUE+idlMM/5sJ32u0KOgTFOJfm6gTtyTvjSChIsyea6pjh1Oas8NsEJWPnm7eTWMNUTVper1Mfq2di7Wxl7g2mnQF1f9lZXEpFLYSUOeW/LDcYrt+KmOzwdie7bBa6ut3XLu/GqmXCIdQJivf3YafIEey8HUCoap0CD/67J3TL4GNWYpSLyHZ8+tnyH1o1DUopQcEQq82YPETZbz7m1SZNdkTW7urc/T/YUYXB9OqoZTcdMQTxcBezQtLR6pLwlk79kXmSexhw9XZKt26D7SkxWO3XkJDehe+JOQ283gENR0Bi9xjRNSeLFeZczbM8LgeTOtjsYVWDaSCERMK30es99a43jOHJvgQc8KaYKo9iihc8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG5e/QGBKdCU0MiCMKtAS5faK6scEANXhee3MrXfRe5T#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGmdw8ziRFF+QShjWCTje17+56t1rJ+wJUoJrhdtL1Gsz/IovFuhm/YW1sC1ANbhgzpetMbHVKF09oEYGtwR+74=#012 create=True mode=0644 path=/tmp/ansible.vhwiq61a state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:51 np0005603787 python3.9[69027]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.vhwiq61a' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:55:52 np0005603787 python3.9[69181]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.vhwiq61a state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:55:52 np0005603787 systemd[1]: session-15.scope: Deactivated successfully.
Jan 31 04:55:52 np0005603787 systemd[1]: session-15.scope: Consumed 2.896s CPU time.
Jan 31 04:55:52 np0005603787 systemd-logind[786]: Session 15 logged out. Waiting for processes to exit.
Jan 31 04:55:52 np0005603787 systemd-logind[786]: Removed session 15.
Jan 31 04:55:55 np0005603787 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 04:55:57 np0005603787 systemd-logind[786]: New session 16 of user zuul.
Jan 31 04:55:57 np0005603787 systemd[1]: Started Session 16 of User zuul.
Jan 31 04:55:58 np0005603787 python3.9[69362]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:55:59 np0005603787 python3.9[69518]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 04:56:00 np0005603787 python3.9[69672]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 04:56:01 np0005603787 python3.9[69825]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:56:02 np0005603787 python3.9[69978]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 04:56:02 np0005603787 python3.9[70132]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:56:03 np0005603787 python3.9[70287]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
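The firewall refresh above loads the EDPM chain definitions, stats the edpm-rules.nft.changed marker, replays the flush, rules and jump files through a single nft invocation, and then removes the marker. Expanded from the logged parameters, the two nft steps are:

    nft -f /etc/nftables/edpm-chains.nft

    set -o pipefail
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -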
Jan 31 04:56:03 np0005603787 systemd-logind[786]: Session 16 logged out. Waiting for processes to exit.
Jan 31 04:56:03 np0005603787 systemd[1]: session-16.scope: Deactivated successfully.
Jan 31 04:56:03 np0005603787 systemd[1]: session-16.scope: Consumed 3.780s CPU time.
Jan 31 04:56:03 np0005603787 systemd-logind[786]: Removed session 16.
Jan 31 04:56:08 np0005603787 systemd-logind[786]: New session 17 of user zuul.
Jan 31 04:56:08 np0005603787 systemd[1]: Started Session 17 of User zuul.
Jan 31 04:56:09 np0005603787 python3.9[70465]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:56:10 np0005603787 python3.9[70621]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 04:56:11 np0005603787 python3.9[70705]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 04:56:13 np0005603787 python3.9[70856]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:56:14 np0005603787 python3.9[71007]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
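The two checks above feed a reboot-required decision: needs-restarting -r (from the yum-utils installed just before) reports whether updated packages call for a full reboot, and the find looks for flag files under /var/lib/openstack/reboot_required/. A minimal shell equivalent of the first check:

    needs-restarting -r
    echo "exit code: $?"   # conventionally 0 = no reboot needed, non-zero = reboot advised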
Jan 31 04:56:15 np0005603787 python3.9[71157]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 04:56:15 np0005603787 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 04:56:16 np0005603787 python3.9[71308]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 04:56:16 np0005603787 systemd[1]: session-17.scope: Deactivated successfully.
Jan 31 04:56:16 np0005603787 systemd[1]: session-17.scope: Consumed 5.310s CPU time.
Jan 31 04:56:16 np0005603787 systemd-logind[786]: Session 17 logged out. Waiting for processes to exit.
Jan 31 04:56:16 np0005603787 systemd-logind[786]: Removed session 17.
Jan 31 04:56:24 np0005603787 systemd-logind[786]: New session 18 of user zuul.
Jan 31 04:56:24 np0005603787 systemd[1]: Started Session 18 of User zuul.
Jan 31 04:56:29 np0005603787 python3[72074]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:56:30 np0005603787 python3[72169]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 04:56:32 np0005603787 python3[72196]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 04:56:32 np0005603787 python3[72222]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:56:32 np0005603787 kernel: loop: module loaded
Jan 31 04:56:32 np0005603787 kernel: loop3: detected capacity change from 0 to 41943040
Jan 31 04:56:32 np0005603787 python3[72257]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:56:32 np0005603787 lvm[72260]: PV /dev/loop3 not used.
Jan 31 04:56:32 np0005603787 lvm[72269]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 04:56:32 np0005603787 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 31 04:56:32 np0005603787 lvm[72271]:  1 logical volume(s) in volume group "ceph_vg0" now active
Jan 31 04:56:33 np0005603787 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 31 04:56:33 np0005603787 python3[72349]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:56:33 np0005603787 python3[72422]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769853393.0811405-36267-87478850083264/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:56:34 np0005603787 python3[72472]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 04:56:34 np0005603787 systemd[1]: Reloading.
Jan 31 04:56:34 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:56:34 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:56:34 np0005603787 systemd[1]: Starting Ceph OSD losetup...
Jan 31 04:56:34 np0005603787 bash[72512]: /dev/loop3: [64513]:4355719 (/var/lib/ceph-osd-0.img)
Jan 31 04:56:34 np0005603787 systemd[1]: Finished Ceph OSD losetup.
Jan 31 04:56:34 np0005603787 lvm[72513]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 04:56:34 np0005603787 lvm[72513]: VG ceph_vg0 finished
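The block above is the first of three identical loop-device OSD preparations: a sparse 20G image file is created, attached to /dev/loop3, and turned into a single-LV volume group for Ceph, after which a ceph-osd-losetup-0.service unit is installed and started so the loop device survives a reboot. Re-joined from the logged _raw_params (the #012 sequences are newline escapes), the two shell batches were:

    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk

    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
    lvs

The same sequence repeats below for /dev/loop4 (ceph_vg1/ceph_lv1) and /dev/loop5 (ceph_vg2/ceph_lv2).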
Jan 31 04:56:35 np0005603787 python3[72539]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 04:56:36 np0005603787 python3[72566]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 04:56:36 np0005603787 python3[72592]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:56:36 np0005603787 kernel: loop4: detected capacity change from 0 to 41943040
Jan 31 04:56:37 np0005603787 python3[72624]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:56:37 np0005603787 lvm[72627]: PV /dev/loop4 not used.
Jan 31 04:56:37 np0005603787 lvm[72629]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 04:56:37 np0005603787 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Jan 31 04:56:37 np0005603787 lvm[72635]:  1 logical volume(s) in volume group "ceph_vg1" now active
Jan 31 04:56:37 np0005603787 lvm[72640]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 04:56:37 np0005603787 lvm[72640]: VG ceph_vg1 finished
Jan 31 04:56:37 np0005603787 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Jan 31 04:56:37 np0005603787 python3[72719]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:56:38 np0005603787 python3[72792]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769853397.604107-36294-179489317563741/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:56:38 np0005603787 python3[72842]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 04:56:38 np0005603787 systemd[1]: Reloading.
Jan 31 04:56:38 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:56:38 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:56:38 np0005603787 systemd[1]: Starting Ceph OSD losetup...
Jan 31 04:56:38 np0005603787 bash[72882]: /dev/loop4: [64513]:4355721 (/var/lib/ceph-osd-1.img)
Jan 31 04:56:38 np0005603787 systemd[1]: Finished Ceph OSD losetup.
Jan 31 04:56:38 np0005603787 lvm[72883]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 04:56:38 np0005603787 lvm[72883]: VG ceph_vg1 finished
Jan 31 04:56:39 np0005603787 python3[72909]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 04:56:40 np0005603787 python3[72936]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 04:56:40 np0005603787 python3[72962]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:56:40 np0005603787 kernel: loop5: detected capacity change from 0 to 41943040
Jan 31 04:56:41 np0005603787 python3[72994]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:56:41 np0005603787 lvm[72997]: PV /dev/loop5 not used.
Jan 31 04:56:41 np0005603787 lvm[72999]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 04:56:41 np0005603787 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Jan 31 04:56:41 np0005603787 lvm[73002]:  1 logical volume(s) in volume group "ceph_vg2" now active
Jan 31 04:56:41 np0005603787 lvm[73009]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 04:56:41 np0005603787 lvm[73009]: VG ceph_vg2 finished
Jan 31 04:56:41 np0005603787 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Jan 31 04:56:42 np0005603787 python3[73087]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:56:42 np0005603787 python3[73160]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769853401.7435193-36321-103643828193396/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:56:42 np0005603787 python3[73210]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 04:56:42 np0005603787 systemd[1]: Reloading.
Jan 31 04:56:42 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:56:42 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:56:43 np0005603787 systemd[1]: Starting Ceph OSD losetup...
Jan 31 04:56:43 np0005603787 bash[73251]: /dev/loop5: [64513]:4355722 (/var/lib/ceph-osd-2.img)
Jan 31 04:56:43 np0005603787 systemd[1]: Finished Ceph OSD losetup.
Jan 31 04:56:43 np0005603787 lvm[73252]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 04:56:43 np0005603787 lvm[73252]: VG ceph_vg2 finished
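The ceph-osd-losetup-N.service units rendered from ceph-osd-losetup.service.j2 are not captured in this log (content=NOT_LOGGING_PARAMETER), but the "Starting Ceph OSD losetup..." / "Finished Ceph OSD losetup." pairs and the bash output (e.g. /dev/loop3: [64513]:4355719 (/var/lib/ceph-osd-0.img)) point at a oneshot unit that re-attaches the backing image at boot. A plausible sketch of such a unit, purely an assumption about the undisclosed template:

    [Unit]
    Description=Ceph OSD losetup
    After=local-fs.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # hypothetical: attach the image only if /dev/loop3 is not already set up
    ExecStart=/bin/bash -c 'losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target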
Jan 31 04:56:44 np0005603787 python3[73276]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 04:56:46 np0005603787 python3[73369]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 04:56:49 np0005603787 python3[73426]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 04:56:53 np0005603787 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 04:56:53 np0005603787 systemd[1]: Starting man-db-cache-update.service...
Jan 31 04:56:54 np0005603787 python3[73545]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 04:56:54 np0005603787 python3[73573]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:56:54 np0005603787 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 04:56:54 np0005603787 systemd[1]: Finished man-db-cache-update.service.
Jan 31 04:56:54 np0005603787 systemd[1]: run-r21619e8a810f4c0c93aa426bf8d97055.service: Deactivated successfully.
Jan 31 04:56:55 np0005603787 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 04:56:55 np0005603787 chronyd[58552]: Selected source 162.159.200.123 (pool.ntp.org)
Jan 31 04:56:55 np0005603787 python3[73613]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:56:55 np0005603787 python3[73639]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:56:56 np0005603787 python3[73717]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:56:56 np0005603787 python3[73790]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769853416.2806072-36469-243509820080049/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:56:57 np0005603787 python3[73892]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:56:57 np0005603787 python3[73965]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769853417.2460144-36487-12435997316126/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:56:58 np0005603787 python3[74015]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 04:56:58 np0005603787 python3[74043]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 04:56:58 np0005603787 python3[74071]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 04:56:58 np0005603787 python3[74097]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 04:56:59 np0005603787 python3[74123]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
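The cephadm bootstrap call above is logged as one shell line with escaped newlines; broken out per flag it reads as below. The literal backslashes in front of --single-host-defaults and --skip-monitoring-stack appear verbatim in the logged command but are harmless, since the shell strips a backslash before an ordinary character.

    /usr/sbin/cephadm bootstrap \
        --skip-firewalld \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin \
        --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --single-host-defaults \
        --skip-monitoring-stack \
        --skip-dashboard \
        --mon-ip 192.168.122.100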
Jan 31 04:56:59 np0005603787 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 04:56:59 np0005603787 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 04:56:59 np0005603787 systemd-logind[786]: New session 19 of user ceph-admin.
Jan 31 04:56:59 np0005603787 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 04:56:59 np0005603787 systemd[1]: Starting User Manager for UID 42477...
Jan 31 04:56:59 np0005603787 systemd[74131]: Queued start job for default target Main User Target.
Jan 31 04:56:59 np0005603787 systemd[74131]: Created slice User Application Slice.
Jan 31 04:56:59 np0005603787 systemd[74131]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 04:56:59 np0005603787 systemd[74131]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 04:56:59 np0005603787 systemd[74131]: Reached target Paths.
Jan 31 04:56:59 np0005603787 systemd[74131]: Reached target Timers.
Jan 31 04:56:59 np0005603787 systemd[74131]: Starting D-Bus User Message Bus Socket...
Jan 31 04:56:59 np0005603787 systemd[74131]: Starting Create User's Volatile Files and Directories...
Jan 31 04:56:59 np0005603787 systemd[74131]: Finished Create User's Volatile Files and Directories.
Jan 31 04:56:59 np0005603787 systemd[74131]: Listening on D-Bus User Message Bus Socket.
Jan 31 04:56:59 np0005603787 systemd[74131]: Reached target Sockets.
Jan 31 04:56:59 np0005603787 systemd[74131]: Reached target Basic System.
Jan 31 04:56:59 np0005603787 systemd[74131]: Reached target Main User Target.
Jan 31 04:56:59 np0005603787 systemd[74131]: Startup finished in 115ms.
Jan 31 04:56:59 np0005603787 systemd[1]: Started User Manager for UID 42477.
Jan 31 04:56:59 np0005603787 systemd[1]: Started Session 19 of User ceph-admin.
Jan 31 04:56:59 np0005603787 systemd[1]: session-19.scope: Deactivated successfully.
Jan 31 04:56:59 np0005603787 systemd-logind[786]: Session 19 logged out. Waiting for processes to exit.
Jan 31 04:56:59 np0005603787 systemd-logind[786]: Removed session 19.
Jan 31 04:56:59 np0005603787 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 04:56:59 np0005603787 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 04:57:02 np0005603787 systemd[1]: var-lib-containers-storage-overlay-compat905407680-lower\x2dmapped.mount: Deactivated successfully.
Jan 31 04:57:10 np0005603787 systemd[1]: Stopping User Manager for UID 42477...
Jan 31 04:57:10 np0005603787 systemd[74131]: Activating special unit Exit the Session...
Jan 31 04:57:10 np0005603787 systemd[74131]: Stopped target Main User Target.
Jan 31 04:57:10 np0005603787 systemd[74131]: Stopped target Basic System.
Jan 31 04:57:10 np0005603787 systemd[74131]: Stopped target Paths.
Jan 31 04:57:10 np0005603787 systemd[74131]: Stopped target Sockets.
Jan 31 04:57:10 np0005603787 systemd[74131]: Stopped target Timers.
Jan 31 04:57:10 np0005603787 systemd[74131]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 04:57:10 np0005603787 systemd[74131]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 04:57:10 np0005603787 systemd[74131]: Closed D-Bus User Message Bus Socket.
Jan 31 04:57:10 np0005603787 systemd[74131]: Stopped Create User's Volatile Files and Directories.
Jan 31 04:57:10 np0005603787 systemd[74131]: Removed slice User Application Slice.
Jan 31 04:57:10 np0005603787 systemd[74131]: Reached target Shutdown.
Jan 31 04:57:10 np0005603787 systemd[74131]: Finished Exit the Session.
Jan 31 04:57:10 np0005603787 systemd[74131]: Reached target Exit the Session.
Jan 31 04:57:10 np0005603787 systemd[1]: user@42477.service: Deactivated successfully.
Jan 31 04:57:10 np0005603787 systemd[1]: Stopped User Manager for UID 42477.
Jan 31 04:57:10 np0005603787 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 31 04:57:10 np0005603787 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 31 04:57:10 np0005603787 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 31 04:57:10 np0005603787 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 31 04:57:10 np0005603787 systemd[1]: Removed slice User Slice of UID 42477.
Jan 31 04:57:15 np0005603787 podman[74224]: 2026-01-31 09:57:15.840049637 +0000 UTC m=+15.800660635 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:15 np0005603787 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 04:57:15 np0005603787 podman[74284]: 2026-01-31 09:57:15.911210371 +0000 UTC m=+0.053055331 container create e3ed9e3ca09fe03d4c01a55779879015a195d5393590156cdd6f009ceed6a930 (image=quay.io/ceph/ceph:v20, name=musing_ellis, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:15 np0005603787 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck3046958395-merged.mount: Deactivated successfully.
Jan 31 04:57:15 np0005603787 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 31 04:57:15 np0005603787 systemd[1]: Started libpod-conmon-e3ed9e3ca09fe03d4c01a55779879015a195d5393590156cdd6f009ceed6a930.scope.
Jan 31 04:57:15 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:15 np0005603787 podman[74284]: 2026-01-31 09:57:15.885434461 +0000 UTC m=+0.027279461 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:16 np0005603787 podman[74284]: 2026-01-31 09:57:16.002532525 +0000 UTC m=+0.144377515 container init e3ed9e3ca09fe03d4c01a55779879015a195d5393590156cdd6f009ceed6a930 (image=quay.io/ceph/ceph:v20, name=musing_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:16 np0005603787 podman[74284]: 2026-01-31 09:57:16.008484264 +0000 UTC m=+0.150329224 container start e3ed9e3ca09fe03d4c01a55779879015a195d5393590156cdd6f009ceed6a930 (image=quay.io/ceph/ceph:v20, name=musing_ellis, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 04:57:16 np0005603787 podman[74284]: 2026-01-31 09:57:16.011780862 +0000 UTC m=+0.153625832 container attach e3ed9e3ca09fe03d4c01a55779879015a195d5393590156cdd6f009ceed6a930 (image=quay.io/ceph/ceph:v20, name=musing_ellis, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 04:57:16 np0005603787 musing_ellis[74299]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 31 04:57:16 np0005603787 systemd[1]: libpod-e3ed9e3ca09fe03d4c01a55779879015a195d5393590156cdd6f009ceed6a930.scope: Deactivated successfully.
Jan 31 04:57:16 np0005603787 podman[74284]: 2026-01-31 09:57:16.092949264 +0000 UTC m=+0.234794234 container died e3ed9e3ca09fe03d4c01a55779879015a195d5393590156cdd6f009ceed6a930 (image=quay.io/ceph/ceph:v20, name=musing_ellis, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:57:16 np0005603787 systemd[1]: var-lib-containers-storage-overlay-96836811756d7b34bfcfd6e73fefb51a8a29908c17aca1ce9cf5b1e874b43818-merged.mount: Deactivated successfully.
Jan 31 04:57:16 np0005603787 podman[74284]: 2026-01-31 09:57:16.14141041 +0000 UTC m=+0.283255370 container remove e3ed9e3ca09fe03d4c01a55779879015a195d5393590156cdd6f009ceed6a930 (image=quay.io/ceph/ceph:v20, name=musing_ellis, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 04:57:16 np0005603787 systemd[1]: libpod-conmon-e3ed9e3ca09fe03d4c01a55779879015a195d5393590156cdd6f009ceed6a930.scope: Deactivated successfully.
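The container lifecycle just above (create, init, start, attach, died, remove within roughly a second) is cephadm running short-lived helper containers from quay.io/ceph/ceph:v20 during bootstrap: this first one prints the ceph version (ceph version 20.2.0 ... tentacle). The following ones behave the same way; the "167 167" line is the ceph user and group ID inside the image, and the AQD... strings appear to be freshly generated cephx keys. Conceptually each run is close to a throwaway invocation such as (an approximation, not the exact cephadm command line):

    podman run --rm --entrypoint /usr/bin/ceph quay.io/ceph/ceph:v20 --version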
Jan 31 04:57:16 np0005603787 podman[74316]: 2026-01-31 09:57:16.190485683 +0000 UTC m=+0.037705129 container create 5e6e5f2f452e80a7b9d2f6dae4a215bb920700c1ec07f255a58f5367fced62b4 (image=quay.io/ceph/ceph:v20, name=clever_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 04:57:16 np0005603787 systemd[1]: Started libpod-conmon-5e6e5f2f452e80a7b9d2f6dae4a215bb920700c1ec07f255a58f5367fced62b4.scope.
Jan 31 04:57:16 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:16 np0005603787 podman[74316]: 2026-01-31 09:57:16.243917144 +0000 UTC m=+0.091136600 container init 5e6e5f2f452e80a7b9d2f6dae4a215bb920700c1ec07f255a58f5367fced62b4 (image=quay.io/ceph/ceph:v20, name=clever_mcclintock, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 31 04:57:16 np0005603787 podman[74316]: 2026-01-31 09:57:16.248718362 +0000 UTC m=+0.095937808 container start 5e6e5f2f452e80a7b9d2f6dae4a215bb920700c1ec07f255a58f5367fced62b4 (image=quay.io/ceph/ceph:v20, name=clever_mcclintock, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 04:57:16 np0005603787 clever_mcclintock[74332]: 167 167
Jan 31 04:57:16 np0005603787 systemd[1]: libpod-5e6e5f2f452e80a7b9d2f6dae4a215bb920700c1ec07f255a58f5367fced62b4.scope: Deactivated successfully.
Jan 31 04:57:16 np0005603787 podman[74316]: 2026-01-31 09:57:16.253267643 +0000 UTC m=+0.100487119 container attach 5e6e5f2f452e80a7b9d2f6dae4a215bb920700c1ec07f255a58f5367fced62b4 (image=quay.io/ceph/ceph:v20, name=clever_mcclintock, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:16 np0005603787 podman[74316]: 2026-01-31 09:57:16.25389058 +0000 UTC m=+0.101110026 container died 5e6e5f2f452e80a7b9d2f6dae4a215bb920700c1ec07f255a58f5367fced62b4 (image=quay.io/ceph/ceph:v20, name=clever_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 04:57:16 np0005603787 podman[74316]: 2026-01-31 09:57:16.171383383 +0000 UTC m=+0.018602859 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:16 np0005603787 podman[74316]: 2026-01-31 09:57:16.304250038 +0000 UTC m=+0.151469484 container remove 5e6e5f2f452e80a7b9d2f6dae4a215bb920700c1ec07f255a58f5367fced62b4 (image=quay.io/ceph/ceph:v20, name=clever_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:57:16 np0005603787 systemd[1]: libpod-conmon-5e6e5f2f452e80a7b9d2f6dae4a215bb920700c1ec07f255a58f5367fced62b4.scope: Deactivated successfully.
Jan 31 04:57:16 np0005603787 podman[74349]: 2026-01-31 09:57:16.363327559 +0000 UTC m=+0.044509183 container create 489ee7c1970b6d50f9d81096e251bb2716d9853e23650ec57ae75bc73f557233 (image=quay.io/ceph/ceph:v20, name=kind_hugle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 04:57:16 np0005603787 systemd[1]: Started libpod-conmon-489ee7c1970b6d50f9d81096e251bb2716d9853e23650ec57ae75bc73f557233.scope.
Jan 31 04:57:16 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:16 np0005603787 podman[74349]: 2026-01-31 09:57:16.430984149 +0000 UTC m=+0.112165793 container init 489ee7c1970b6d50f9d81096e251bb2716d9853e23650ec57ae75bc73f557233 (image=quay.io/ceph/ceph:v20, name=kind_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 04:57:16 np0005603787 podman[74349]: 2026-01-31 09:57:16.435439038 +0000 UTC m=+0.116620702 container start 489ee7c1970b6d50f9d81096e251bb2716d9853e23650ec57ae75bc73f557233 (image=quay.io/ceph/ceph:v20, name=kind_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:57:16 np0005603787 podman[74349]: 2026-01-31 09:57:16.340696613 +0000 UTC m=+0.021878287 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:16 np0005603787 podman[74349]: 2026-01-31 09:57:16.452600378 +0000 UTC m=+0.133782042 container attach 489ee7c1970b6d50f9d81096e251bb2716d9853e23650ec57ae75bc73f557233 (image=quay.io/ceph/ceph:v20, name=kind_hugle, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:57:16 np0005603787 kind_hugle[74365]: AQD80X1pN5E5GxAAjQ4dQ03cxMMxAt6Epb5h4w==
Jan 31 04:57:16 np0005603787 systemd[1]: libpod-489ee7c1970b6d50f9d81096e251bb2716d9853e23650ec57ae75bc73f557233.scope: Deactivated successfully.
Jan 31 04:57:16 np0005603787 podman[74349]: 2026-01-31 09:57:16.461193717 +0000 UTC m=+0.142375341 container died 489ee7c1970b6d50f9d81096e251bb2716d9853e23650ec57ae75bc73f557233 (image=quay.io/ceph/ceph:v20, name=kind_hugle, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:16 np0005603787 podman[74349]: 2026-01-31 09:57:16.535010522 +0000 UTC m=+0.216192136 container remove 489ee7c1970b6d50f9d81096e251bb2716d9853e23650ec57ae75bc73f557233 (image=quay.io/ceph/ceph:v20, name=kind_hugle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:16 np0005603787 systemd[1]: libpod-conmon-489ee7c1970b6d50f9d81096e251bb2716d9853e23650ec57ae75bc73f557233.scope: Deactivated successfully.
Jan 31 04:57:16 np0005603787 podman[74387]: 2026-01-31 09:57:16.60891979 +0000 UTC m=+0.053550184 container create c1c3a31d7c839c10f90a63fdd69c5a29dab1274a627640c26a07fb5b850e45c8 (image=quay.io/ceph/ceph:v20, name=festive_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 04:57:16 np0005603787 systemd[1]: Started libpod-conmon-c1c3a31d7c839c10f90a63fdd69c5a29dab1274a627640c26a07fb5b850e45c8.scope.
Jan 31 04:57:16 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:16 np0005603787 podman[74387]: 2026-01-31 09:57:16.671885875 +0000 UTC m=+0.116516369 container init c1c3a31d7c839c10f90a63fdd69c5a29dab1274a627640c26a07fb5b850e45c8 (image=quay.io/ceph/ceph:v20, name=festive_engelbart, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 04:57:16 np0005603787 podman[74387]: 2026-01-31 09:57:16.675558433 +0000 UTC m=+0.120188817 container start c1c3a31d7c839c10f90a63fdd69c5a29dab1274a627640c26a07fb5b850e45c8 (image=quay.io/ceph/ceph:v20, name=festive_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Jan 31 04:57:16 np0005603787 podman[74387]: 2026-01-31 09:57:16.585481522 +0000 UTC m=+0.030111976 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:16 np0005603787 podman[74387]: 2026-01-31 09:57:16.684944645 +0000 UTC m=+0.129575079 container attach c1c3a31d7c839c10f90a63fdd69c5a29dab1274a627640c26a07fb5b850e45c8 (image=quay.io/ceph/ceph:v20, name=festive_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:16 np0005603787 festive_engelbart[74404]: AQD80X1pTnAlKRAAeqH9FHeEU0ABDcTiDCcA6g==
Jan 31 04:57:16 np0005603787 systemd[1]: libpod-c1c3a31d7c839c10f90a63fdd69c5a29dab1274a627640c26a07fb5b850e45c8.scope: Deactivated successfully.
Jan 31 04:57:16 np0005603787 podman[74387]: 2026-01-31 09:57:16.692285171 +0000 UTC m=+0.136915565 container died c1c3a31d7c839c10f90a63fdd69c5a29dab1274a627640c26a07fb5b850e45c8 (image=quay.io/ceph/ceph:v20, name=festive_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Jan 31 04:57:16 np0005603787 podman[74387]: 2026-01-31 09:57:16.75803519 +0000 UTC m=+0.202665604 container remove c1c3a31d7c839c10f90a63fdd69c5a29dab1274a627640c26a07fb5b850e45c8 (image=quay.io/ceph/ceph:v20, name=festive_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 04:57:16 np0005603787 systemd[1]: libpod-conmon-c1c3a31d7c839c10f90a63fdd69c5a29dab1274a627640c26a07fb5b850e45c8.scope: Deactivated successfully.
Jan 31 04:57:16 np0005603787 podman[74423]: 2026-01-31 09:57:16.825367911 +0000 UTC m=+0.051952180 container create b421f2dc64d3266c42c32143e04b9f85bb86914b0bd95e7e17947132b86eaadb (image=quay.io/ceph/ceph:v20, name=great_thompson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:57:16 np0005603787 systemd[1]: Started libpod-conmon-b421f2dc64d3266c42c32143e04b9f85bb86914b0bd95e7e17947132b86eaadb.scope.
Jan 31 04:57:16 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:16 np0005603787 podman[74423]: 2026-01-31 09:57:16.877823654 +0000 UTC m=+0.104407953 container init b421f2dc64d3266c42c32143e04b9f85bb86914b0bd95e7e17947132b86eaadb (image=quay.io/ceph/ceph:v20, name=great_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Jan 31 04:57:16 np0005603787 podman[74423]: 2026-01-31 09:57:16.882190931 +0000 UTC m=+0.108775200 container start b421f2dc64d3266c42c32143e04b9f85bb86914b0bd95e7e17947132b86eaadb (image=quay.io/ceph/ceph:v20, name=great_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:16 np0005603787 podman[74423]: 2026-01-31 09:57:16.88886891 +0000 UTC m=+0.115453179 container attach b421f2dc64d3266c42c32143e04b9f85bb86914b0bd95e7e17947132b86eaadb (image=quay.io/ceph/ceph:v20, name=great_thompson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 04:57:16 np0005603787 podman[74423]: 2026-01-31 09:57:16.798596855 +0000 UTC m=+0.025181154 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:16 np0005603787 great_thompson[74439]: AQD80X1pMqR6NRAA/mwdDHsk1/6HikyVxlZWhg==
Jan 31 04:57:16 np0005603787 systemd[1]: libpod-b421f2dc64d3266c42c32143e04b9f85bb86914b0bd95e7e17947132b86eaadb.scope: Deactivated successfully.
Jan 31 04:57:16 np0005603787 podman[74423]: 2026-01-31 09:57:16.89930865 +0000 UTC m=+0.125892909 container died b421f2dc64d3266c42c32143e04b9f85bb86914b0bd95e7e17947132b86eaadb (image=quay.io/ceph/ceph:v20, name=great_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:57:16 np0005603787 systemd[1]: var-lib-containers-storage-overlay-086e793ad872255a4bf63934739992543bca1ed42e89732721dff2adec91dc0c-merged.mount: Deactivated successfully.
Jan 31 04:57:16 np0005603787 podman[74423]: 2026-01-31 09:57:16.967784142 +0000 UTC m=+0.194368401 container remove b421f2dc64d3266c42c32143e04b9f85bb86914b0bd95e7e17947132b86eaadb (image=quay.io/ceph/ceph:v20, name=great_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 04:57:16 np0005603787 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 04:57:16 np0005603787 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 04:57:16 np0005603787 systemd[1]: libpod-conmon-b421f2dc64d3266c42c32143e04b9f85bb86914b0bd95e7e17947132b86eaadb.scope: Deactivated successfully.
Jan 31 04:57:17 np0005603787 podman[74458]: 2026-01-31 09:57:17.044026581 +0000 UTC m=+0.061327532 container create 4b89acfa5be77bdd402c959f96ebcfb26c4c44405650c109ca2d95e7e37f162b (image=quay.io/ceph/ceph:v20, name=fervent_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 31 04:57:17 np0005603787 systemd[1]: Started libpod-conmon-4b89acfa5be77bdd402c959f96ebcfb26c4c44405650c109ca2d95e7e37f162b.scope.
Jan 31 04:57:17 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:17 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34985e66ef74900e043be35684bf110e1074352f43978fdf9254bb09eecd43a5/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:17 np0005603787 podman[74458]: 2026-01-31 09:57:17.006379124 +0000 UTC m=+0.023680155 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:17 np0005603787 podman[74458]: 2026-01-31 09:57:17.107163391 +0000 UTC m=+0.124464402 container init 4b89acfa5be77bdd402c959f96ebcfb26c4c44405650c109ca2d95e7e37f162b (image=quay.io/ceph/ceph:v20, name=fervent_bartik, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 04:57:17 np0005603787 podman[74458]: 2026-01-31 09:57:17.111715572 +0000 UTC m=+0.129016523 container start 4b89acfa5be77bdd402c959f96ebcfb26c4c44405650c109ca2d95e7e37f162b (image=quay.io/ceph/ceph:v20, name=fervent_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:57:17 np0005603787 podman[74458]: 2026-01-31 09:57:17.115131124 +0000 UTC m=+0.132432105 container attach 4b89acfa5be77bdd402c959f96ebcfb26c4c44405650c109ca2d95e7e37f162b (image=quay.io/ceph/ceph:v20, name=fervent_bartik, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:57:17 np0005603787 fervent_bartik[74473]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 31 04:57:17 np0005603787 fervent_bartik[74473]: setting min_mon_release = tentacle
Jan 31 04:57:17 np0005603787 fervent_bartik[74473]: /usr/bin/monmaptool: set fsid to 962d77ae-dc67-5de8-89d8-3d1670c67b61
Jan 31 04:57:17 np0005603787 fervent_bartik[74473]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 31 04:57:17 np0005603787 systemd[1]: libpod-4b89acfa5be77bdd402c959f96ebcfb26c4c44405650c109ca2d95e7e37f162b.scope: Deactivated successfully.
Jan 31 04:57:17 np0005603787 podman[74458]: 2026-01-31 09:57:17.143352119 +0000 UTC m=+0.160653070 container died 4b89acfa5be77bdd402c959f96ebcfb26c4c44405650c109ca2d95e7e37f162b (image=quay.io/ceph/ceph:v20, name=fervent_bartik, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 04:57:17 np0005603787 podman[74458]: 2026-01-31 09:57:17.217806141 +0000 UTC m=+0.235107132 container remove 4b89acfa5be77bdd402c959f96ebcfb26c4c44405650c109ca2d95e7e37f162b (image=quay.io/ceph/ceph:v20, name=fervent_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:17 np0005603787 systemd[1]: libpod-conmon-4b89acfa5be77bdd402c959f96ebcfb26c4c44405650c109ca2d95e7e37f162b.scope: Deactivated successfully.
Jan 31 04:57:17 np0005603787 podman[74493]: 2026-01-31 09:57:17.297265567 +0000 UTC m=+0.057899531 container create bf45e897b4b9b9351d2b3e40e9646c29ada454b97ac4d6ac78b6196ec573ffe1 (image=quay.io/ceph/ceph:v20, name=recursing_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 04:57:17 np0005603787 systemd[1]: Started libpod-conmon-bf45e897b4b9b9351d2b3e40e9646c29ada454b97ac4d6ac78b6196ec573ffe1.scope.
Jan 31 04:57:17 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:17 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16b011e5611860f5c66b0b0576161e3eecb9e2df02f31799b75de7ebd76cf20/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:17 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16b011e5611860f5c66b0b0576161e3eecb9e2df02f31799b75de7ebd76cf20/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:17 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16b011e5611860f5c66b0b0576161e3eecb9e2df02f31799b75de7ebd76cf20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:17 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16b011e5611860f5c66b0b0576161e3eecb9e2df02f31799b75de7ebd76cf20/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:17 np0005603787 podman[74493]: 2026-01-31 09:57:17.266927945 +0000 UTC m=+0.027561949 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:17 np0005603787 podman[74493]: 2026-01-31 09:57:17.374464753 +0000 UTC m=+0.135098767 container init bf45e897b4b9b9351d2b3e40e9646c29ada454b97ac4d6ac78b6196ec573ffe1 (image=quay.io/ceph/ceph:v20, name=recursing_dijkstra, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 04:57:17 np0005603787 podman[74493]: 2026-01-31 09:57:17.381347227 +0000 UTC m=+0.141981201 container start bf45e897b4b9b9351d2b3e40e9646c29ada454b97ac4d6ac78b6196ec573ffe1 (image=quay.io/ceph/ceph:v20, name=recursing_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:57:17 np0005603787 podman[74493]: 2026-01-31 09:57:17.390982725 +0000 UTC m=+0.151616689 container attach bf45e897b4b9b9351d2b3e40e9646c29ada454b97ac4d6ac78b6196ec573ffe1 (image=quay.io/ceph/ceph:v20, name=recursing_dijkstra, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 04:57:17 np0005603787 systemd[1]: libpod-bf45e897b4b9b9351d2b3e40e9646c29ada454b97ac4d6ac78b6196ec573ffe1.scope: Deactivated successfully.
Jan 31 04:57:17 np0005603787 podman[74536]: 2026-01-31 09:57:17.570324583 +0000 UTC m=+0.033556309 container died bf45e897b4b9b9351d2b3e40e9646c29ada454b97ac4d6ac78b6196ec573ffe1 (image=quay.io/ceph/ceph:v20, name=recursing_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:57:17 np0005603787 podman[74536]: 2026-01-31 09:57:17.643310276 +0000 UTC m=+0.106541972 container remove bf45e897b4b9b9351d2b3e40e9646c29ada454b97ac4d6ac78b6196ec573ffe1 (image=quay.io/ceph/ceph:v20, name=recursing_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:57:17 np0005603787 systemd[1]: libpod-conmon-bf45e897b4b9b9351d2b3e40e9646c29ada454b97ac4d6ac78b6196ec573ffe1.scope: Deactivated successfully.
Jan 31 04:57:17 np0005603787 systemd[1]: Reloading.
Jan 31 04:57:17 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:57:17 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:57:17 np0005603787 systemd[1]: Reloading.
Jan 31 04:57:17 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:57:17 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:57:18 np0005603787 systemd[1]: Reached target All Ceph clusters and services.
Jan 31 04:57:18 np0005603787 systemd[1]: Reloading.
Jan 31 04:57:18 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:57:18 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:57:18 np0005603787 systemd[1]: Reached target Ceph cluster 962d77ae-dc67-5de8-89d8-3d1670c67b61.
Jan 31 04:57:18 np0005603787 systemd[1]: Reloading.
Jan 31 04:57:18 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:57:18 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:57:18 np0005603787 systemd[1]: Reloading.
Jan 31 04:57:18 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:57:18 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:57:18 np0005603787 systemd[1]: Created slice Slice /system/ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61.
Jan 31 04:57:18 np0005603787 systemd[1]: Reached target System Time Set.
Jan 31 04:57:18 np0005603787 systemd[1]: Reached target System Time Synchronized.
Jan 31 04:57:18 np0005603787 systemd[1]: Starting Ceph mon.compute-0 for 962d77ae-dc67-5de8-89d8-3d1670c67b61...
Jan 31 04:57:18 np0005603787 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 04:57:18 np0005603787 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 04:57:18 np0005603787 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 04:57:18 np0005603787 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 04:57:19 np0005603787 podman[74787]: 2026-01-31 09:57:19.013153687 +0000 UTC m=+0.056291777 container create 50fc5a9b0ccd57109b4d0db8b85f936a9245ee82ccfb8c49b2f1e6f18c4d6ddd (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 04:57:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20910b3cdc3f88631016de828d0ee545eda7834523d24bfef41e93f94cc932c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20910b3cdc3f88631016de828d0ee545eda7834523d24bfef41e93f94cc932c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20910b3cdc3f88631016de828d0ee545eda7834523d24bfef41e93f94cc932c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20910b3cdc3f88631016de828d0ee545eda7834523d24bfef41e93f94cc932c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:19 np0005603787 podman[74787]: 2026-01-31 09:57:19.073538922 +0000 UTC m=+0.116677012 container init 50fc5a9b0ccd57109b4d0db8b85f936a9245ee82ccfb8c49b2f1e6f18c4d6ddd (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 04:57:19 np0005603787 podman[74787]: 2026-01-31 09:57:18.978968863 +0000 UTC m=+0.022107003 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:19 np0005603787 podman[74787]: 2026-01-31 09:57:19.077746565 +0000 UTC m=+0.120884635 container start 50fc5a9b0ccd57109b4d0db8b85f936a9245ee82ccfb8c49b2f1e6f18c4d6ddd (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 04:57:19 np0005603787 bash[74787]: 50fc5a9b0ccd57109b4d0db8b85f936a9245ee82ccfb8c49b2f1e6f18c4d6ddd
Jan 31 04:57:19 np0005603787 systemd[1]: Started Ceph mon.compute-0 for 962d77ae-dc67-5de8-89d8-3d1670c67b61.
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: pidfile_write: ignore empty --pid-file
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: load: jerasure load: lrc 
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: RocksDB version: 7.9.2
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Git sha 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: DB SUMMARY
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: DB Session ID:  CGQ3FBJF187VKK8HPY09
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: CURRENT file:  CURRENT
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                         Options.error_if_exists: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                       Options.create_if_missing: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                                     Options.env: 0x5601628c3440
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                                Options.info_log: 0x5601637433e0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                              Options.statistics: (nil)
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                               Options.use_fsync: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                              Options.db_log_dir: 
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                                 Options.wal_dir: 
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                    Options.write_buffer_manager: 0x5601636c2140
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                  Options.unordered_write: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                               Options.row_cache: None
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                              Options.wal_filter: None
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.two_write_queues: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.wal_compression: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.atomic_flush: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.max_background_jobs: 2
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.max_background_compactions: -1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.max_subcompactions: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                          Options.max_open_files: -1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Compression algorithms supported:
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         kZSTD supported: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         kXpressCompression supported: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         kBZip2Compression supported: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         kLZ4Compression supported: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         kZlibCompression supported: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         kSnappyCompression supported: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:           Options.merge_operator: 
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:        Options.compaction_filter: None
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5601636ce600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5601636b38d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:          Options.compression: NoCompression
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.num_levels: 7
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a9067dea-12fb-43c2-8d5c-dbf66227f0e8
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853439116712, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853439120253, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853439, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "CGQ3FBJF187VKK8HPY09", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853439120358, "job": 1, "event": "recovery_finished"}
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5601636e0e00
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: DB pointer 0x56016382c000
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.004       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.08 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.08 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5601636b38d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@-1(???) e0 preinit fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
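At this point mon.compute-0 has won a standalone election and holds a one-member quorum as leader. A hedged sketch of how that state could be confirmed from the host once the client.admin keyring under /etc/ceph is usable (standard ceph CLI calls, not commands taken from this bootstrap run):

    ceph quorum_status -f json-pretty   # quorum members, leader name, election epoch
    ceph mon stat                       # one-line monmap and quorum summary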
Jan 31 04:57:19 np0005603787 podman[74809]: 2026-01-31 09:57:19.165212546 +0000 UTC m=+0.046178357 container create 23fe1e87a3c2339dae6a1152ccef11dd35a29f3c4028feac06cab10cb76d85e3 (image=quay.io/ceph/ceph:v20, name=condescending_chebyshev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: log_channel(cluster) log [DBG] : fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: log_channel(cluster) log [DBG] : last_changed 2026-01-31T09:57:17.140831+0000
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: log_channel(cluster) log [DBG] : created 2026-01-31T09:57:17.140831+0000
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-01-31T09:57:17.437686Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864292,os=Linux}
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).mds e1 new map
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).mds e1 print_map
e1
btime 2026-01-31T09:57:19.176309+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mkfs 962d77ae-dc67-5de8-89d8-3d1670c67b61
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 04:57:19 np0005603787 systemd[1]: Started libpod-conmon-23fe1e87a3c2339dae6a1152ccef11dd35a29f3c4028feac06cab10cb76d85e3.scope.
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 04:57:19 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee89320db5d9ce627ef224a18d51991c8d9d9ff40afb892e49eadb0f2c9902ed/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee89320db5d9ce627ef224a18d51991c8d9d9ff40afb892e49eadb0f2c9902ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee89320db5d9ce627ef224a18d51991c8d9d9ff40afb892e49eadb0f2c9902ed/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:19 np0005603787 podman[74809]: 2026-01-31 09:57:19.144357858 +0000 UTC m=+0.025323669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:19 np0005603787 podman[74809]: 2026-01-31 09:57:19.251004571 +0000 UTC m=+0.131970392 container init 23fe1e87a3c2339dae6a1152ccef11dd35a29f3c4028feac06cab10cb76d85e3 (image=quay.io/ceph/ceph:v20, name=condescending_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:57:19 np0005603787 podman[74809]: 2026-01-31 09:57:19.254977367 +0000 UTC m=+0.135943178 container start 23fe1e87a3c2339dae6a1152ccef11dd35a29f3c4028feac06cab10cb76d85e3 (image=quay.io/ceph/ceph:v20, name=condescending_chebyshev, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:19 np0005603787 podman[74809]: 2026-01-31 09:57:19.263124746 +0000 UTC m=+0.144090557 container attach 23fe1e87a3c2339dae6a1152ccef11dd35a29f3c4028feac06cab10cb76d85e3 (image=quay.io/ceph/ceph:v20, name=condescending_chebyshev, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1342635142' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 31 04:57:19 np0005603787 condescending_chebyshev[74863]:  cluster:
Jan 31 04:57:19 np0005603787 condescending_chebyshev[74863]:    id:     962d77ae-dc67-5de8-89d8-3d1670c67b61
Jan 31 04:57:19 np0005603787 condescending_chebyshev[74863]:    health: HEALTH_OK
Jan 31 04:57:19 np0005603787 condescending_chebyshev[74863]: 
Jan 31 04:57:19 np0005603787 condescending_chebyshev[74863]:  services:
Jan 31 04:57:19 np0005603787 condescending_chebyshev[74863]:    mon: 1 daemons, quorum compute-0 (age 0.271691s) [leader: compute-0]
Jan 31 04:57:19 np0005603787 condescending_chebyshev[74863]:    mgr: no daemons active
Jan 31 04:57:19 np0005603787 condescending_chebyshev[74863]:    osd: 0 osds: 0 up, 0 in
Jan 31 04:57:19 np0005603787 condescending_chebyshev[74863]: 
Jan 31 04:57:19 np0005603787 condescending_chebyshev[74863]:  data:
Jan 31 04:57:19 np0005603787 condescending_chebyshev[74863]:    pools:   0 pools, 0 pgs
Jan 31 04:57:19 np0005603787 condescending_chebyshev[74863]:    objects: 0 objects, 0 B
Jan 31 04:57:19 np0005603787 condescending_chebyshev[74863]:    usage:   0 B used, 0 B / 0 B avail
Jan 31 04:57:19 np0005603787 condescending_chebyshev[74863]:    pgs:     
Jan 31 04:57:19 np0005603787 condescending_chebyshev[74863]: 
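The indented block above is the reply to the {"prefix": "status"} mon_command dispatched just before it; the bootstrap runs the query in a throwaway ceph container (here named condescending_chebyshev) that exits as soon as the output is printed. A minimal sketch of reproducing the same check by hand, assuming cephadm and the admin keyring are present on this host (the exact container invocation used by the bootstrap is not reproduced here):

    cephadm shell -- ceph -s   # run the status query inside a disposable ceph container
    ceph -s                    # or directly, if ceph-common and /etc/ceph are on the host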
Jan 31 04:57:19 np0005603787 systemd[1]: libpod-23fe1e87a3c2339dae6a1152ccef11dd35a29f3c4028feac06cab10cb76d85e3.scope: Deactivated successfully.
Jan 31 04:57:19 np0005603787 podman[74809]: 2026-01-31 09:57:19.457565818 +0000 UTC m=+0.338531659 container died 23fe1e87a3c2339dae6a1152ccef11dd35a29f3c4028feac06cab10cb76d85e3 (image=quay.io/ceph/ceph:v20, name=condescending_chebyshev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:57:19 np0005603787 podman[74809]: 2026-01-31 09:57:19.628652386 +0000 UTC m=+0.509618237 container remove 23fe1e87a3c2339dae6a1152ccef11dd35a29f3c4028feac06cab10cb76d85e3 (image=quay.io/ceph/ceph:v20, name=condescending_chebyshev, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 04:57:19 np0005603787 systemd[1]: libpod-conmon-23fe1e87a3c2339dae6a1152ccef11dd35a29f3c4028feac06cab10cb76d85e3.scope: Deactivated successfully.
Jan 31 04:57:19 np0005603787 podman[74900]: 2026-01-31 09:57:19.705469531 +0000 UTC m=+0.055766503 container create de6b2025a607b53ede66c888c6366bc72f14ab0996748a2aa492376d248ce66f (image=quay.io/ceph/ceph:v20, name=strange_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:19 np0005603787 systemd[1]: Started libpod-conmon-de6b2025a607b53ede66c888c6366bc72f14ab0996748a2aa492376d248ce66f.scope.
Jan 31 04:57:19 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135c74274569269a0199e6414a0367bb6568e3c21b30bec15f962522989bcd3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135c74274569269a0199e6414a0367bb6568e3c21b30bec15f962522989bcd3f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135c74274569269a0199e6414a0367bb6568e3c21b30bec15f962522989bcd3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135c74274569269a0199e6414a0367bb6568e3c21b30bec15f962522989bcd3f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:19 np0005603787 podman[74900]: 2026-01-31 09:57:19.680298738 +0000 UTC m=+0.030595740 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:19 np0005603787 podman[74900]: 2026-01-31 09:57:19.783865539 +0000 UTC m=+0.134162521 container init de6b2025a607b53ede66c888c6366bc72f14ab0996748a2aa492376d248ce66f (image=quay.io/ceph/ceph:v20, name=strange_bell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:57:19 np0005603787 podman[74900]: 2026-01-31 09:57:19.788281947 +0000 UTC m=+0.138578959 container start de6b2025a607b53ede66c888c6366bc72f14ab0996748a2aa492376d248ce66f (image=quay.io/ceph/ceph:v20, name=strange_bell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:57:19 np0005603787 podman[74900]: 2026-01-31 09:57:19.794976446 +0000 UTC m=+0.145273428 container attach de6b2025a607b53ede66c888c6366bc72f14ab0996748a2aa492376d248ce66f (image=quay.io/ceph/ceph:v20, name=strange_bell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2468804053' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 04:57:19 np0005603787 ceph-mon[74808]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2468804053' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 04:57:19 np0005603787 strange_bell[74917]: 
Jan 31 04:57:19 np0005603787 strange_bell[74917]: [global]
Jan 31 04:57:19 np0005603787 strange_bell[74917]: 	fsid = 962d77ae-dc67-5de8-89d8-3d1670c67b61
Jan 31 04:57:19 np0005603787 strange_bell[74917]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 31 04:57:19 np0005603787 strange_bell[74917]: 	osd_crush_chooseleaf_type = 0
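The [global] section above is what "config assimilate-conf" prints back after ingesting the supplied ceph.conf: the monitor absorbs what it can into its central configuration database and echoes the configuration that remains for the local file (bootstrap-time settings such as fsid and mon_host). A hedged sketch of the equivalent manual invocation, with illustrative paths rather than the ones used by the bootstrap container:

    ceph config assimilate-conf -i /etc/ceph/ceph.conf -o /etc/ceph/ceph.conf.remainder
    ceph config dump   # verify which options now live in the monitor config database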
Jan 31 04:57:19 np0005603787 systemd[1]: libpod-de6b2025a607b53ede66c888c6366bc72f14ab0996748a2aa492376d248ce66f.scope: Deactivated successfully.
Jan 31 04:57:19 np0005603787 conmon[74917]: conmon de6b2025a607b53ede66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-de6b2025a607b53ede66c888c6366bc72f14ab0996748a2aa492376d248ce66f.scope/container/memory.events
Jan 31 04:57:19 np0005603787 podman[74900]: 2026-01-31 09:57:19.999113498 +0000 UTC m=+0.349410490 container died de6b2025a607b53ede66c888c6366bc72f14ab0996748a2aa492376d248ce66f (image=quay.io/ceph/ceph:v20, name=strange_bell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 04:57:20 np0005603787 systemd[1]: var-lib-containers-storage-overlay-135c74274569269a0199e6414a0367bb6568e3c21b30bec15f962522989bcd3f-merged.mount: Deactivated successfully.
Jan 31 04:57:20 np0005603787 podman[74900]: 2026-01-31 09:57:20.063669165 +0000 UTC m=+0.413966137 container remove de6b2025a607b53ede66c888c6366bc72f14ab0996748a2aa492376d248ce66f (image=quay.io/ceph/ceph:v20, name=strange_bell, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 04:57:20 np0005603787 systemd[1]: libpod-conmon-de6b2025a607b53ede66c888c6366bc72f14ab0996748a2aa492376d248ce66f.scope: Deactivated successfully.
Jan 31 04:57:20 np0005603787 podman[74953]: 2026-01-31 09:57:20.116466257 +0000 UTC m=+0.038772998 container create b74d763084881ec5fed26c99ed401bfa60a6815035b43c4ac7c651dc1c00c679 (image=quay.io/ceph/ceph:v20, name=angry_ganguly, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 04:57:20 np0005603787 systemd[1]: Started libpod-conmon-b74d763084881ec5fed26c99ed401bfa60a6815035b43c4ac7c651dc1c00c679.scope.
Jan 31 04:57:20 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8709277a50849fa280672f67b62ca19c745c744d8ac9ab44c4b3c3a86411f3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8709277a50849fa280672f67b62ca19c745c744d8ac9ab44c4b3c3a86411f3b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8709277a50849fa280672f67b62ca19c745c744d8ac9ab44c4b3c3a86411f3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8709277a50849fa280672f67b62ca19c745c744d8ac9ab44c4b3c3a86411f3b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:20 np0005603787 podman[74953]: 2026-01-31 09:57:20.179990507 +0000 UTC m=+0.102297278 container init b74d763084881ec5fed26c99ed401bfa60a6815035b43c4ac7c651dc1c00c679 (image=quay.io/ceph/ceph:v20, name=angry_ganguly, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:57:20 np0005603787 podman[74953]: 2026-01-31 09:57:20.1875846 +0000 UTC m=+0.109891331 container start b74d763084881ec5fed26c99ed401bfa60a6815035b43c4ac7c651dc1c00c679 (image=quay.io/ceph/ceph:v20, name=angry_ganguly, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:20 np0005603787 podman[74953]: 2026-01-31 09:57:20.095577798 +0000 UTC m=+0.017884519 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:20 np0005603787 podman[74953]: 2026-01-31 09:57:20.194900656 +0000 UTC m=+0.117207397 container attach b74d763084881ec5fed26c99ed401bfa60a6815035b43c4ac7c651dc1c00c679 (image=quay.io/ceph/ceph:v20, name=angry_ganguly, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 04:57:20 np0005603787 ceph-mon[74808]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 04:57:20 np0005603787 ceph-mon[74808]: from='client.? 192.168.122.100:0/2468804053' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 04:57:20 np0005603787 ceph-mon[74808]: from='client.? 192.168.122.100:0/2468804053' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 04:57:20 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:57:20 np0005603787 ceph-mon[74808]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3586025663' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
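"config generate-minimal-conf" asks the monitor for the smallest client configuration needed to reach the cluster, essentially the fsid and mon_host entries seen above; the bootstrap uses it to write the host's /etc/ceph/ceph.conf. A sketch of the same call made directly (output path is illustrative):

    ceph config generate-minimal-conf > /etc/ceph/ceph.conf.minimal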
Jan 31 04:57:20 np0005603787 systemd[1]: libpod-b74d763084881ec5fed26c99ed401bfa60a6815035b43c4ac7c651dc1c00c679.scope: Deactivated successfully.
Jan 31 04:57:20 np0005603787 conmon[74970]: conmon b74d763084881ec5fed2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b74d763084881ec5fed26c99ed401bfa60a6815035b43c4ac7c651dc1c00c679.scope/container/memory.events
Jan 31 04:57:20 np0005603787 podman[74996]: 2026-01-31 09:57:20.418436727 +0000 UTC m=+0.020262823 container died b74d763084881ec5fed26c99ed401bfa60a6815035b43c4ac7c651dc1c00c679 (image=quay.io/ceph/ceph:v20, name=angry_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 04:57:20 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f8709277a50849fa280672f67b62ca19c745c744d8ac9ab44c4b3c3a86411f3b-merged.mount: Deactivated successfully.
Jan 31 04:57:20 np0005603787 podman[74996]: 2026-01-31 09:57:20.503425221 +0000 UTC m=+0.105251287 container remove b74d763084881ec5fed26c99ed401bfa60a6815035b43c4ac7c651dc1c00c679 (image=quay.io/ceph/ceph:v20, name=angry_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 04:57:20 np0005603787 systemd[1]: libpod-conmon-b74d763084881ec5fed26c99ed401bfa60a6815035b43c4ac7c651dc1c00c679.scope: Deactivated successfully.
Jan 31 04:57:20 np0005603787 systemd[1]: Stopping Ceph mon.compute-0 for 962d77ae-dc67-5de8-89d8-3d1670c67b61...
Jan 31 04:57:20 np0005603787 ceph-mon[74808]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 31 04:57:20 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 31 04:57:20 np0005603787 ceph-mon[74808]: mon.compute-0@0(leader) e1 shutdown
Jan 31 04:57:20 np0005603787 ceph-mon[74808]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 04:57:20 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0[74804]: 2026-01-31T09:57:20.670+0000 7f47f3e3c640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 31 04:57:20 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0[74804]: 2026-01-31T09:57:20.670+0000 7f47f3e3c640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 31 04:57:20 np0005603787 ceph-mon[74808]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 04:57:20 np0005603787 podman[75038]: 2026-01-31 09:57:20.772973853 +0000 UTC m=+0.136274497 container died 50fc5a9b0ccd57109b4d0db8b85f936a9245ee82ccfb8c49b2f1e6f18c4d6ddd (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Jan 31 04:57:20 np0005603787 systemd[1]: var-lib-containers-storage-overlay-c20910b3cdc3f88631016de828d0ee545eda7834523d24bfef41e93f94cc932c-merged.mount: Deactivated successfully.
Jan 31 04:57:20 np0005603787 podman[75038]: 2026-01-31 09:57:20.822673603 +0000 UTC m=+0.185974227 container remove 50fc5a9b0ccd57109b4d0db8b85f936a9245ee82ccfb8c49b2f1e6f18c4d6ddd (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 04:57:20 np0005603787 bash[75038]: ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0
Jan 31 04:57:20 np0005603787 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 04:57:20 np0005603787 systemd[1]: ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61@mon.compute-0.service: Deactivated successfully.
Jan 31 04:57:20 np0005603787 systemd[1]: Stopped Ceph mon.compute-0 for 962d77ae-dc67-5de8-89d8-3d1670c67b61.
Jan 31 04:57:20 np0005603787 systemd[1]: Starting Ceph mon.compute-0 for 962d77ae-dc67-5de8-89d8-3d1670c67b61...
Jan 31 04:57:20 np0005603787 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 04:57:21 np0005603787 podman[75140]: 2026-01-31 09:57:21.093013526 +0000 UTC m=+0.034436932 container create 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 04:57:21 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6815213533265f0444dd3aebaf4b4c3cf67d86aa0ae2534fa4ad967f03412f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:21 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6815213533265f0444dd3aebaf4b4c3cf67d86aa0ae2534fa4ad967f03412f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:21 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6815213533265f0444dd3aebaf4b4c3cf67d86aa0ae2534fa4ad967f03412f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:21 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6815213533265f0444dd3aebaf4b4c3cf67d86aa0ae2534fa4ad967f03412f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:21 np0005603787 podman[75140]: 2026-01-31 09:57:21.153822103 +0000 UTC m=+0.095245529 container init 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:21 np0005603787 podman[75140]: 2026-01-31 09:57:21.157948543 +0000 UTC m=+0.099371949 container start 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 04:57:21 np0005603787 bash[75140]: 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891
Jan 31 04:57:21 np0005603787 podman[75140]: 2026-01-31 09:57:21.078822037 +0000 UTC m=+0.020245463 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:21 np0005603787 systemd[1]: Started Ceph mon.compute-0 for 962d77ae-dc67-5de8-89d8-3d1670c67b61.
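The restart above is driven by the cephadm-style systemd unit ceph-<fsid>@mon.<hostname>.service, which wraps the monitor's podman container: systemd stops the bootstrap-time container and starts the replacement (the ceph-mon[75160] messages that follow). A sketch of inspecting or following that unit on this host, using the unit name shown in the log and standard systemd tooling:

    systemctl status ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61@mon.compute-0.service
    journalctl -u ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61@mon.compute-0.service -f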
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: pidfile_write: ignore empty --pid-file
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: load: jerasure load: lrc 
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: RocksDB version: 7.9.2
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Git sha 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: DB SUMMARY
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: DB Session ID:  EXKALWXQ4I64EWVUMKE5
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: CURRENT file:  CURRENT
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                         Options.error_if_exists: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                       Options.create_if_missing: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                                     Options.env: 0x55b1fc556440
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                                Options.info_log: 0x55b1fd3ede80
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                              Options.statistics: (nil)
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                               Options.use_fsync: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                              Options.db_log_dir: 
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                                 Options.wal_dir: 
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                    Options.write_buffer_manager: 0x55b1fd438140
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                  Options.unordered_write: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                               Options.row_cache: None
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                              Options.wal_filter: None
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.two_write_queues: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.wal_compression: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.atomic_flush: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.max_background_jobs: 2
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.max_background_compactions: -1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.max_subcompactions: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                          Options.max_open_files: -1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Compression algorithms supported:
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: 	kZSTD supported: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: 	kXpressCompression supported: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: 	kBZip2Compression supported: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: 	kLZ4Compression supported: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: 	kZlibCompression supported: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: 	kSnappyCompression supported: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:           Options.merge_operator: 
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:        Options.compaction_filter: None
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1fd444a00)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b1fd4298d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:          Options.compression: NoCompression
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.num_levels: 7
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a9067dea-12fb-43c2-8d5c-dbf66227f0e8
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853441198388, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853441202191, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853441, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853441202295, "job": 1, "event": "recovery_finished"}
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b1fd456e00
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: DB pointer 0x55b1fd5a0000
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 3.98 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 3.98 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b1fd4298d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: mon.compute-0@-1(???) e1 preinit fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: mon.compute-0@-1(???).mds e1 new map
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2026-01-31T09:57:19:176309+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : last_changed 2026-01-31T09:57:17.140831+0000
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : created 2026-01-31T09:57:17.140831+0000
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 31 04:57:21 np0005603787 podman[75161]: 2026-01-31 09:57:21.224373131 +0000 UTC m=+0.042995542 container create 601083e3fb3628a040c6e43d8a5b93057f02ca461549ab37125c03ab58d3c034 (image=quay.io/ceph/ceph:v20, name=vibrant_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 31 04:57:21 np0005603787 systemd[1]: Started libpod-conmon-601083e3fb3628a040c6e43d8a5b93057f02ca461549ab37125c03ab58d3c034.scope.
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 04:57:21 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:21 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d2caa9ad64cafb440c0b6aaec330066b16950453a3926def323d5846a2d3e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:21 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d2caa9ad64cafb440c0b6aaec330066b16950453a3926def323d5846a2d3e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:21 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d2caa9ad64cafb440c0b6aaec330066b16950453a3926def323d5846a2d3e5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:21 np0005603787 podman[75161]: 2026-01-31 09:57:21.203633406 +0000 UTC m=+0.022255837 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:21 np0005603787 podman[75161]: 2026-01-31 09:57:21.313218118 +0000 UTC m=+0.131840549 container init 601083e3fb3628a040c6e43d8a5b93057f02ca461549ab37125c03ab58d3c034 (image=quay.io/ceph/ceph:v20, name=vibrant_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 04:57:21 np0005603787 podman[75161]: 2026-01-31 09:57:21.323058421 +0000 UTC m=+0.141680832 container start 601083e3fb3628a040c6e43d8a5b93057f02ca461549ab37125c03ab58d3c034 (image=quay.io/ceph/ceph:v20, name=vibrant_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 04:57:21 np0005603787 podman[75161]: 2026-01-31 09:57:21.327185442 +0000 UTC m=+0.145807953 container attach 601083e3fb3628a040c6e43d8a5b93057f02ca461549ab37125c03ab58d3c034 (image=quay.io/ceph/ceph:v20, name=vibrant_khorana, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 31 04:57:21 np0005603787 systemd[1]: libpod-601083e3fb3628a040c6e43d8a5b93057f02ca461549ab37125c03ab58d3c034.scope: Deactivated successfully.
Jan 31 04:57:21 np0005603787 podman[75161]: 2026-01-31 09:57:21.554650398 +0000 UTC m=+0.373272809 container died 601083e3fb3628a040c6e43d8a5b93057f02ca461549ab37125c03ab58d3c034 (image=quay.io/ceph/ceph:v20, name=vibrant_khorana, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:21 np0005603787 podman[75161]: 2026-01-31 09:57:21.609994748 +0000 UTC m=+0.428617159 container remove 601083e3fb3628a040c6e43d8a5b93057f02ca461549ab37125c03ab58d3c034 (image=quay.io/ceph/ceph:v20, name=vibrant_khorana, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:21 np0005603787 systemd[1]: libpod-conmon-601083e3fb3628a040c6e43d8a5b93057f02ca461549ab37125c03ab58d3c034.scope: Deactivated successfully.
Jan 31 04:57:21 np0005603787 podman[75253]: 2026-01-31 09:57:21.667513907 +0000 UTC m=+0.039624191 container create 8eacd91765f9b4cdffdf41247d4cfdb41293ab8653811c6d72dfb4a47e8c414b (image=quay.io/ceph/ceph:v20, name=distracted_snyder, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 04:57:21 np0005603787 systemd[1]: Started libpod-conmon-8eacd91765f9b4cdffdf41247d4cfdb41293ab8653811c6d72dfb4a47e8c414b.scope.
Jan 31 04:57:21 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:21 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93e6d851a0d36397f237ce7a07dbc1546d2012c2b14f3283bf727533e5ccc6f7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:21 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93e6d851a0d36397f237ce7a07dbc1546d2012c2b14f3283bf727533e5ccc6f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:21 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93e6d851a0d36397f237ce7a07dbc1546d2012c2b14f3283bf727533e5ccc6f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:21 np0005603787 podman[75253]: 2026-01-31 09:57:21.736905894 +0000 UTC m=+0.109016178 container init 8eacd91765f9b4cdffdf41247d4cfdb41293ab8653811c6d72dfb4a47e8c414b (image=quay.io/ceph/ceph:v20, name=distracted_snyder, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 04:57:21 np0005603787 podman[75253]: 2026-01-31 09:57:21.741003514 +0000 UTC m=+0.113113798 container start 8eacd91765f9b4cdffdf41247d4cfdb41293ab8653811c6d72dfb4a47e8c414b (image=quay.io/ceph/ceph:v20, name=distracted_snyder, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:21 np0005603787 podman[75253]: 2026-01-31 09:57:21.648464638 +0000 UTC m=+0.020574942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:21 np0005603787 podman[75253]: 2026-01-31 09:57:21.745819353 +0000 UTC m=+0.117929817 container attach 8eacd91765f9b4cdffdf41247d4cfdb41293ab8653811c6d72dfb4a47e8c414b (image=quay.io/ceph/ceph:v20, name=distracted_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:57:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 31 04:57:21 np0005603787 systemd[1]: libpod-8eacd91765f9b4cdffdf41247d4cfdb41293ab8653811c6d72dfb4a47e8c414b.scope: Deactivated successfully.
Jan 31 04:57:21 np0005603787 podman[75253]: 2026-01-31 09:57:21.934170692 +0000 UTC m=+0.306280976 container died 8eacd91765f9b4cdffdf41247d4cfdb41293ab8653811c6d72dfb4a47e8c414b (image=quay.io/ceph/ceph:v20, name=distracted_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:57:21 np0005603787 systemd[1]: var-lib-containers-storage-overlay-93e6d851a0d36397f237ce7a07dbc1546d2012c2b14f3283bf727533e5ccc6f7-merged.mount: Deactivated successfully.
Jan 31 04:57:21 np0005603787 podman[75253]: 2026-01-31 09:57:21.967684679 +0000 UTC m=+0.339794963 container remove 8eacd91765f9b4cdffdf41247d4cfdb41293ab8653811c6d72dfb4a47e8c414b (image=quay.io/ceph/ceph:v20, name=distracted_snyder, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:21 np0005603787 systemd[1]: libpod-conmon-8eacd91765f9b4cdffdf41247d4cfdb41293ab8653811c6d72dfb4a47e8c414b.scope: Deactivated successfully.
Jan 31 04:57:22 np0005603787 systemd[1]: Reloading.
Jan 31 04:57:22 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:57:22 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:57:22 np0005603787 systemd[1]: Reloading.
Jan 31 04:57:22 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:57:22 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:57:22 np0005603787 systemd[1]: Starting Ceph mgr.compute-0.mdmqaq for 962d77ae-dc67-5de8-89d8-3d1670c67b61...
Jan 31 04:57:22 np0005603787 podman[75433]: 2026-01-31 09:57:22.676095883 +0000 UTC m=+0.048197740 container create c0327d95fd7f2355a56225e86a66c1ed7727e088b1f56947c36a38a93eea7cde (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-mdmqaq, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 04:57:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e1846cf5b39642e577d4fdeb4739cff15ec6596d12d8229b16beda19275c74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e1846cf5b39642e577d4fdeb4739cff15ec6596d12d8229b16beda19275c74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e1846cf5b39642e577d4fdeb4739cff15ec6596d12d8229b16beda19275c74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e1846cf5b39642e577d4fdeb4739cff15ec6596d12d8229b16beda19275c74/merged/var/lib/ceph/mgr/ceph-compute-0.mdmqaq supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:22 np0005603787 podman[75433]: 2026-01-31 09:57:22.653074217 +0000 UTC m=+0.025176134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:22 np0005603787 podman[75433]: 2026-01-31 09:57:22.754070709 +0000 UTC m=+0.126172586 container init c0327d95fd7f2355a56225e86a66c1ed7727e088b1f56947c36a38a93eea7cde (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-mdmqaq, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 04:57:22 np0005603787 podman[75433]: 2026-01-31 09:57:22.760649186 +0000 UTC m=+0.132751043 container start c0327d95fd7f2355a56225e86a66c1ed7727e088b1f56947c36a38a93eea7cde (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-mdmqaq, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 04:57:22 np0005603787 bash[75433]: c0327d95fd7f2355a56225e86a66c1ed7727e088b1f56947c36a38a93eea7cde
Jan 31 04:57:22 np0005603787 systemd[1]: Started Ceph mgr.compute-0.mdmqaq for 962d77ae-dc67-5de8-89d8-3d1670c67b61.
Jan 31 04:57:22 np0005603787 ceph-mgr[75453]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 04:57:22 np0005603787 ceph-mgr[75453]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 31 04:57:22 np0005603787 ceph-mgr[75453]: pidfile_write: ignore empty --pid-file
Jan 31 04:57:22 np0005603787 podman[75454]: 2026-01-31 09:57:22.835131588 +0000 UTC m=+0.042576329 container create 7c12def71fc955a9fa27aa3e757d944ff34034e79c08908253ec41439dde2bee (image=quay.io/ceph/ceph:v20, name=sweet_bhabha, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 04:57:22 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'alerts'
Jan 31 04:57:22 np0005603787 systemd[1]: Started libpod-conmon-7c12def71fc955a9fa27aa3e757d944ff34034e79c08908253ec41439dde2bee.scope.
Jan 31 04:57:22 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0871e14e77705ce20c762c163f545f441c46f97dd2a6e0cd7ac77c6cf2c19bdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0871e14e77705ce20c762c163f545f441c46f97dd2a6e0cd7ac77c6cf2c19bdd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0871e14e77705ce20c762c163f545f441c46f97dd2a6e0cd7ac77c6cf2c19bdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:22 np0005603787 podman[75454]: 2026-01-31 09:57:22.909472567 +0000 UTC m=+0.116917328 container init 7c12def71fc955a9fa27aa3e757d944ff34034e79c08908253ec41439dde2bee (image=quay.io/ceph/ceph:v20, name=sweet_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 04:57:22 np0005603787 podman[75454]: 2026-01-31 09:57:22.815891683 +0000 UTC m=+0.023336434 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:22 np0005603787 podman[75454]: 2026-01-31 09:57:22.91739763 +0000 UTC m=+0.124842371 container start 7c12def71fc955a9fa27aa3e757d944ff34034e79c08908253ec41439dde2bee (image=quay.io/ceph/ceph:v20, name=sweet_bhabha, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:22 np0005603787 podman[75454]: 2026-01-31 09:57:22.9203896 +0000 UTC m=+0.127834351 container attach 7c12def71fc955a9fa27aa3e757d944ff34034e79c08908253ec41439dde2bee (image=quay.io/ceph/ceph:v20, name=sweet_bhabha, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:57:22 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'balancer'
Jan 31 04:57:23 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'cephadm'
Jan 31 04:57:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 04:57:23 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3681701623' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]: 
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]: {
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    "fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    "health": {
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "status": "HEALTH_OK",
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "checks": {},
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "mutes": []
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    },
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    "election_epoch": 5,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    "quorum": [
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        0
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    ],
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    "quorum_names": [
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "compute-0"
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    ],
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    "quorum_age": 1,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    "monmap": {
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "epoch": 1,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "min_mon_release_name": "tentacle",
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "num_mons": 1
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    },
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    "osdmap": {
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "epoch": 1,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "num_osds": 0,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "num_up_osds": 0,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "osd_up_since": 0,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "num_in_osds": 0,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "osd_in_since": 0,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "num_remapped_pgs": 0
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    },
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    "pgmap": {
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "pgs_by_state": [],
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "num_pgs": 0,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "num_pools": 0,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "num_objects": 0,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "data_bytes": 0,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "bytes_used": 0,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "bytes_avail": 0,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "bytes_total": 0
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    },
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    "fsmap": {
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "epoch": 1,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "btime": "2026-01-31T09:57:19:176309+0000",
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "by_rank": [],
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "up:standby": 0
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    },
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    "mgrmap": {
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "available": false,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "num_standbys": 0,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "modules": [
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:            "iostat",
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:            "nfs"
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        ],
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "services": {}
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    },
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    "servicemap": {
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "epoch": 1,
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "modified": "2026-01-31T09:57:19.182834+0000",
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:        "services": {}
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    },
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]:    "progress_events": {}
Jan 31 04:57:23 np0005603787 sweet_bhabha[75490]: }
Jan 31 04:57:23 np0005603787 systemd[1]: libpod-7c12def71fc955a9fa27aa3e757d944ff34034e79c08908253ec41439dde2bee.scope: Deactivated successfully.
Jan 31 04:57:23 np0005603787 podman[75454]: 2026-01-31 09:57:23.138105994 +0000 UTC m=+0.345550725 container died 7c12def71fc955a9fa27aa3e757d944ff34034e79c08908253ec41439dde2bee (image=quay.io/ceph/ceph:v20, name=sweet_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 04:57:23 np0005603787 systemd[1]: var-lib-containers-storage-overlay-0871e14e77705ce20c762c163f545f441c46f97dd2a6e0cd7ac77c6cf2c19bdd-merged.mount: Deactivated successfully.
Jan 31 04:57:23 np0005603787 podman[75454]: 2026-01-31 09:57:23.17869118 +0000 UTC m=+0.386135911 container remove 7c12def71fc955a9fa27aa3e757d944ff34034e79c08908253ec41439dde2bee (image=quay.io/ceph/ceph:v20, name=sweet_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 04:57:23 np0005603787 systemd[1]: libpod-conmon-7c12def71fc955a9fa27aa3e757d944ff34034e79c08908253ec41439dde2bee.scope: Deactivated successfully.
Jan 31 04:57:23 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'crash'
Jan 31 04:57:23 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'dashboard'
Jan 31 04:57:24 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'devicehealth'
Jan 31 04:57:24 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 04:57:24 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-mdmqaq[75449]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 04:57:24 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-mdmqaq[75449]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 04:57:24 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-mdmqaq[75449]:  from numpy import show_config as show_numpy_config
Jan 31 04:57:24 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'influx'
Jan 31 04:57:24 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'insights'
Jan 31 04:57:24 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'iostat'
Jan 31 04:57:24 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'k8sevents'
Jan 31 04:57:25 np0005603787 podman[75539]: 2026-01-31 09:57:25.244694019 +0000 UTC m=+0.046075224 container create 9c5eeb428a13b02a4577e2f9926489cee12339ed6a679853002bad3ebbb8819f (image=quay.io/ceph/ceph:v20, name=goofy_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:57:25 np0005603787 systemd[1]: Started libpod-conmon-9c5eeb428a13b02a4577e2f9926489cee12339ed6a679853002bad3ebbb8819f.scope.
Jan 31 04:57:25 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7afa335b1a66cfa2b647568294bada14c548d78d0048ae910731b5bb509212c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7afa335b1a66cfa2b647568294bada14c548d78d0048ae910731b5bb509212c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7afa335b1a66cfa2b647568294bada14c548d78d0048ae910731b5bb509212c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:25 np0005603787 podman[75539]: 2026-01-31 09:57:25.221433116 +0000 UTC m=+0.022814351 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:25 np0005603787 podman[75539]: 2026-01-31 09:57:25.3258428 +0000 UTC m=+0.127224015 container init 9c5eeb428a13b02a4577e2f9926489cee12339ed6a679853002bad3ebbb8819f (image=quay.io/ceph/ceph:v20, name=goofy_rhodes, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 04:57:25 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'localpool'
Jan 31 04:57:25 np0005603787 podman[75539]: 2026-01-31 09:57:25.331215594 +0000 UTC m=+0.132596789 container start 9c5eeb428a13b02a4577e2f9926489cee12339ed6a679853002bad3ebbb8819f (image=quay.io/ceph/ceph:v20, name=goofy_rhodes, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:25 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 04:57:25 np0005603787 podman[75539]: 2026-01-31 09:57:25.442961234 +0000 UTC m=+0.244342519 container attach 9c5eeb428a13b02a4577e2f9926489cee12339ed6a679853002bad3ebbb8819f (image=quay.io/ceph/ceph:v20, name=goofy_rhodes, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 04:57:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3715963599' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]: 
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]: {
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    "fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    "health": {
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "status": "HEALTH_OK",
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "checks": {},
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "mutes": []
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    },
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    "election_epoch": 5,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    "quorum": [
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        0
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    ],
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    "quorum_names": [
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "compute-0"
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    ],
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    "quorum_age": 4,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    "monmap": {
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "epoch": 1,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "min_mon_release_name": "tentacle",
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "num_mons": 1
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    },
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    "osdmap": {
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "epoch": 1,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "num_osds": 0,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "num_up_osds": 0,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "osd_up_since": 0,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "num_in_osds": 0,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "osd_in_since": 0,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "num_remapped_pgs": 0
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    },
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    "pgmap": {
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "pgs_by_state": [],
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "num_pgs": 0,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "num_pools": 0,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "num_objects": 0,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "data_bytes": 0,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "bytes_used": 0,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "bytes_avail": 0,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "bytes_total": 0
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    },
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    "fsmap": {
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "epoch": 1,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "btime": "2026-01-31T09:57:19:176309+0000",
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "by_rank": [],
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "up:standby": 0
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    },
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    "mgrmap": {
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "available": false,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "num_standbys": 0,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "modules": [
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:            "iostat",
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:            "nfs"
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        ],
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "services": {}
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    },
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    "servicemap": {
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "epoch": 1,
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "modified": "2026-01-31T09:57:19.182834+0000",
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:        "services": {}
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    },
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]:    "progress_events": {}
Jan 31 04:57:25 np0005603787 goofy_rhodes[75554]: }
Jan 31 04:57:25 np0005603787 systemd[1]: libpod-9c5eeb428a13b02a4577e2f9926489cee12339ed6a679853002bad3ebbb8819f.scope: Deactivated successfully.
Jan 31 04:57:25 np0005603787 podman[75539]: 2026-01-31 09:57:25.534974405 +0000 UTC m=+0.336355600 container died 9c5eeb428a13b02a4577e2f9926489cee12339ed6a679853002bad3ebbb8819f (image=quay.io/ceph/ceph:v20, name=goofy_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 04:57:25 np0005603787 systemd[1]: var-lib-containers-storage-overlay-b7afa335b1a66cfa2b647568294bada14c548d78d0048ae910731b5bb509212c-merged.mount: Deactivated successfully.
Jan 31 04:57:25 np0005603787 podman[75539]: 2026-01-31 09:57:25.598997259 +0000 UTC m=+0.400378454 container remove 9c5eeb428a13b02a4577e2f9926489cee12339ed6a679853002bad3ebbb8819f (image=quay.io/ceph/ceph:v20, name=goofy_rhodes, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:25 np0005603787 systemd[1]: libpod-conmon-9c5eeb428a13b02a4577e2f9926489cee12339ed6a679853002bad3ebbb8819f.scope: Deactivated successfully.
Jan 31 04:57:25 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'mirroring'
Jan 31 04:57:25 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'nfs'
Jan 31 04:57:25 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'orchestrator'
Jan 31 04:57:26 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 04:57:26 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'osd_support'
Jan 31 04:57:26 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 04:57:26 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'progress'
Jan 31 04:57:26 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'prometheus'
Jan 31 04:57:26 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'rbd_support'
Jan 31 04:57:26 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'rgw'
Jan 31 04:57:27 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'rook'
Jan 31 04:57:27 np0005603787 podman[75592]: 2026-01-31 09:57:27.663197588 +0000 UTC m=+0.045302503 container create 97c53350d1fc01fce4bdfb66cc42351e0b08afefd7261c44dc2dba3f7f7f2507 (image=quay.io/ceph/ceph:v20, name=elegant_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 04:57:27 np0005603787 systemd[1]: Started libpod-conmon-97c53350d1fc01fce4bdfb66cc42351e0b08afefd7261c44dc2dba3f7f7f2507.scope.
Jan 31 04:57:27 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11fd7865eeb80909bbcdc7e7a7d84d14ce9095782c215e0249ed64a7c3a65252/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11fd7865eeb80909bbcdc7e7a7d84d14ce9095782c215e0249ed64a7c3a65252/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11fd7865eeb80909bbcdc7e7a7d84d14ce9095782c215e0249ed64a7c3a65252/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:27 np0005603787 podman[75592]: 2026-01-31 09:57:27.639840564 +0000 UTC m=+0.021945499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:27 np0005603787 podman[75592]: 2026-01-31 09:57:27.738563785 +0000 UTC m=+0.120668700 container init 97c53350d1fc01fce4bdfb66cc42351e0b08afefd7261c44dc2dba3f7f7f2507 (image=quay.io/ceph/ceph:v20, name=elegant_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:57:27 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'selftest'
Jan 31 04:57:27 np0005603787 podman[75592]: 2026-01-31 09:57:27.745773208 +0000 UTC m=+0.127878123 container start 97c53350d1fc01fce4bdfb66cc42351e0b08afefd7261c44dc2dba3f7f7f2507 (image=quay.io/ceph/ceph:v20, name=elegant_germain, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 04:57:27 np0005603787 podman[75592]: 2026-01-31 09:57:27.750231257 +0000 UTC m=+0.132336222 container attach 97c53350d1fc01fce4bdfb66cc42351e0b08afefd7261c44dc2dba3f7f7f2507 (image=quay.io/ceph/ceph:v20, name=elegant_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:27 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'smb'
Jan 31 04:57:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 04:57:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1931109379' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 04:57:27 np0005603787 elegant_germain[75610]: 
Jan 31 04:57:27 np0005603787 elegant_germain[75610]: {
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    "fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    "health": {
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "status": "HEALTH_OK",
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "checks": {},
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "mutes": []
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    },
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    "election_epoch": 5,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    "quorum": [
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        0
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    ],
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    "quorum_names": [
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "compute-0"
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    ],
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    "quorum_age": 6,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    "monmap": {
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "epoch": 1,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "min_mon_release_name": "tentacle",
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "num_mons": 1
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    },
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    "osdmap": {
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "epoch": 1,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "num_osds": 0,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "num_up_osds": 0,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "osd_up_since": 0,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "num_in_osds": 0,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "osd_in_since": 0,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "num_remapped_pgs": 0
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    },
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    "pgmap": {
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "pgs_by_state": [],
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "num_pgs": 0,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "num_pools": 0,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "num_objects": 0,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "data_bytes": 0,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "bytes_used": 0,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "bytes_avail": 0,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "bytes_total": 0
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    },
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    "fsmap": {
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "epoch": 1,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "btime": "2026-01-31T09:57:19:176309+0000",
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "by_rank": [],
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "up:standby": 0
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    },
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    "mgrmap": {
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "available": false,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "num_standbys": 0,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "modules": [
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:            "iostat",
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:            "nfs"
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        ],
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "services": {}
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    },
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    "servicemap": {
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "epoch": 1,
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "modified": "2026-01-31T09:57:19.182834+0000",
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:        "services": {}
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    },
Jan 31 04:57:27 np0005603787 elegant_germain[75610]:    "progress_events": {}
Jan 31 04:57:27 np0005603787 elegant_germain[75610]: }
Jan 31 04:57:27 np0005603787 systemd[1]: libpod-97c53350d1fc01fce4bdfb66cc42351e0b08afefd7261c44dc2dba3f7f7f2507.scope: Deactivated successfully.
Jan 31 04:57:27 np0005603787 podman[75636]: 2026-01-31 09:57:27.99177043 +0000 UTC m=+0.021180398 container died 97c53350d1fc01fce4bdfb66cc42351e0b08afefd7261c44dc2dba3f7f7f2507 (image=quay.io/ceph/ceph:v20, name=elegant_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 04:57:28 np0005603787 systemd[1]: var-lib-containers-storage-overlay-11fd7865eeb80909bbcdc7e7a7d84d14ce9095782c215e0249ed64a7c3a65252-merged.mount: Deactivated successfully.
Jan 31 04:57:28 np0005603787 podman[75636]: 2026-01-31 09:57:28.032237583 +0000 UTC m=+0.061647541 container remove 97c53350d1fc01fce4bdfb66cc42351e0b08afefd7261c44dc2dba3f7f7f2507 (image=quay.io/ceph/ceph:v20, name=elegant_germain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 04:57:28 np0005603787 systemd[1]: libpod-conmon-97c53350d1fc01fce4bdfb66cc42351e0b08afefd7261c44dc2dba3f7f7f2507.scope: Deactivated successfully.
Jan 31 04:57:28 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'snap_schedule'
Jan 31 04:57:28 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'stats'
Jan 31 04:57:28 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'status'
Jan 31 04:57:28 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'telegraf'
Jan 31 04:57:28 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'telemetry'
Jan 31 04:57:28 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 04:57:28 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'volumes'
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: ms_deliver_dispatch: unhandled message 0x56181631f860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mdmqaq
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: mgr handle_mgr_map Activating!
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: mgr handle_mgr_map I am now activating
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.mdmqaq(active, starting, since 0.0114708s)
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/793543920' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "mds metadata"} : dispatch
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).mds e1 all = 1
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/793543920' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/793543920' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "mon metadata"} : dispatch
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/793543920' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mdmqaq", "id": "compute-0.mdmqaq"} v 0)
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/793543920' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "mgr metadata", "who": "compute-0.mdmqaq", "id": "compute-0.mdmqaq"} : dispatch
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: balancer
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [balancer INFO root] Starting
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: log_channel(cluster) log [INF] : Manager daemon compute-0.mdmqaq is now available
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: crash
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_09:57:29
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [balancer INFO root] No pools available
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: devicehealth
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: iostat
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [devicehealth INFO root] Starting
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: nfs
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: orchestrator
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: pg_autoscaler
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: progress
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [progress INFO root] Loading...
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [progress INFO root] No stored events to load
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [progress INFO root] Loaded [] historic events
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [progress INFO root] Loaded OSDMap, ready.
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] recovery thread starting
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] starting setup
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: rbd_support
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: status
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mdmqaq/mirror_snapshot_schedule"} v 0)
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/793543920' entity='mgr.compute-0.mdmqaq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mdmqaq/mirror_snapshot_schedule"} : dispatch
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: telemetry
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] PerfHandler: starting
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TaskHandler: starting
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mdmqaq/trash_purge_schedule"} v 0)
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/793543920' entity='mgr.compute-0.mdmqaq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mdmqaq/trash_purge_schedule"} : dispatch
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/793543920' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] setup complete
Jan 31 04:57:29 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: volumes
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/793543920' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: Activating manager daemon compute-0.mdmqaq
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: Manager daemon compute-0.mdmqaq is now available
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: from='mgr.14102 192.168.122.100:0/793543920' entity='mgr.compute-0.mdmqaq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mdmqaq/mirror_snapshot_schedule"} : dispatch
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: from='mgr.14102 192.168.122.100:0/793543920' entity='mgr.compute-0.mdmqaq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mdmqaq/trash_purge_schedule"} : dispatch
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: from='mgr.14102 192.168.122.100:0/793543920' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/793543920' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:30 np0005603787 podman[75729]: 2026-01-31 09:57:30.0870391 +0000 UTC m=+0.035126250 container create 6e62b67fd005bf7ece494dfb34fd53122ca10557a891fd15eeecc25afd5d6acf (image=quay.io/ceph/ceph:v20, name=zealous_hoover, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:57:30 np0005603787 systemd[1]: Started libpod-conmon-6e62b67fd005bf7ece494dfb34fd53122ca10557a891fd15eeecc25afd5d6acf.scope.
Jan 31 04:57:30 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.mdmqaq(active, since 1.02285s)
Jan 31 04:57:30 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:30 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3046f0f7ba437d40920d5efad604585b713dcb572e0f302c363e0520741834b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:30 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3046f0f7ba437d40920d5efad604585b713dcb572e0f302c363e0520741834b7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:30 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3046f0f7ba437d40920d5efad604585b713dcb572e0f302c363e0520741834b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:30 np0005603787 podman[75729]: 2026-01-31 09:57:30.068785432 +0000 UTC m=+0.016872632 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:30 np0005603787 podman[75729]: 2026-01-31 09:57:30.169823246 +0000 UTC m=+0.117910426 container init 6e62b67fd005bf7ece494dfb34fd53122ca10557a891fd15eeecc25afd5d6acf (image=quay.io/ceph/ceph:v20, name=zealous_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 04:57:30 np0005603787 podman[75729]: 2026-01-31 09:57:30.174490311 +0000 UTC m=+0.122577461 container start 6e62b67fd005bf7ece494dfb34fd53122ca10557a891fd15eeecc25afd5d6acf (image=quay.io/ceph/ceph:v20, name=zealous_hoover, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 04:57:30 np0005603787 podman[75729]: 2026-01-31 09:57:30.296786113 +0000 UTC m=+0.244873263 container attach 6e62b67fd005bf7ece494dfb34fd53122ca10557a891fd15eeecc25afd5d6acf (image=quay.io/ceph/ceph:v20, name=zealous_hoover, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 04:57:30 np0005603787 ceph-mon[75160]: from='mgr.14102 192.168.122.100:0/793543920' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:30 np0005603787 ceph-mon[75160]: from='mgr.14102 192.168.122.100:0/793543920' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 04:57:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3524998011' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]: 
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]: {
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    "fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    "health": {
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "status": "HEALTH_OK",
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "checks": {},
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "mutes": []
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    },
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    "election_epoch": 5,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    "quorum": [
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        0
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    ],
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    "quorum_names": [
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "compute-0"
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    ],
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    "quorum_age": 9,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    "monmap": {
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "epoch": 1,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "min_mon_release_name": "tentacle",
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "num_mons": 1
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    },
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    "osdmap": {
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "epoch": 1,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "num_osds": 0,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "num_up_osds": 0,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "osd_up_since": 0,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "num_in_osds": 0,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "osd_in_since": 0,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "num_remapped_pgs": 0
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    },
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    "pgmap": {
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "pgs_by_state": [],
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "num_pgs": 0,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "num_pools": 0,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "num_objects": 0,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "data_bytes": 0,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "bytes_used": 0,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "bytes_avail": 0,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "bytes_total": 0
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    },
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    "fsmap": {
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "epoch": 1,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "btime": "2026-01-31T09:57:19:176309+0000",
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "by_rank": [],
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "up:standby": 0
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    },
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    "mgrmap": {
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "available": true,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "num_standbys": 0,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "modules": [
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:            "iostat",
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:            "nfs"
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        ],
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "services": {}
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    },
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    "servicemap": {
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "epoch": 1,
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "modified": "2026-01-31T09:57:19.182834+0000",
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:        "services": {}
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    },
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]:    "progress_events": {}
Jan 31 04:57:30 np0005603787 zealous_hoover[75746]: }
Jan 31 04:57:30 np0005603787 systemd[1]: libpod-6e62b67fd005bf7ece494dfb34fd53122ca10557a891fd15eeecc25afd5d6acf.scope: Deactivated successfully.
Jan 31 04:57:30 np0005603787 podman[75729]: 2026-01-31 09:57:30.695245724 +0000 UTC m=+0.643332904 container died 6e62b67fd005bf7ece494dfb34fd53122ca10557a891fd15eeecc25afd5d6acf (image=quay.io/ceph/ceph:v20, name=zealous_hoover, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 04:57:31 np0005603787 ceph-mgr[75453]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 04:57:31 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:57:31 np0005603787 systemd[1]: var-lib-containers-storage-overlay-3046f0f7ba437d40920d5efad604585b713dcb572e0f302c363e0520741834b7-merged.mount: Deactivated successfully.
Jan 31 04:57:33 np0005603787 ceph-mgr[75453]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 04:57:33 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:57:33 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.mdmqaq(active, since 4s)
Jan 31 04:57:35 np0005603787 ceph-mgr[75453]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 04:57:35 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:57:35 np0005603787 podman[75729]: 2026-01-31 09:57:35.160356133 +0000 UTC m=+5.108443273 container remove 6e62b67fd005bf7ece494dfb34fd53122ca10557a891fd15eeecc25afd5d6acf (image=quay.io/ceph/ceph:v20, name=zealous_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:35 np0005603787 podman[75786]: 2026-01-31 09:57:35.216786692 +0000 UTC m=+0.040911835 container create 418fec6c3c822e42f2b677d6261cdcabefff1e2fc406ca64b27b9534b4a8fcfb (image=quay.io/ceph/ceph:v20, name=silly_poitras, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 04:57:35 np0005603787 systemd[1]: Started libpod-conmon-418fec6c3c822e42f2b677d6261cdcabefff1e2fc406ca64b27b9534b4a8fcfb.scope.
Jan 31 04:57:35 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcbfac81de9a277459385d8e1344a84941f782d431558cca75d1480435e29dec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcbfac81de9a277459385d8e1344a84941f782d431558cca75d1480435e29dec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcbfac81de9a277459385d8e1344a84941f782d431558cca75d1480435e29dec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcbfac81de9a277459385d8e1344a84941f782d431558cca75d1480435e29dec/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:35 np0005603787 podman[75786]: 2026-01-31 09:57:35.284908955 +0000 UTC m=+0.109034108 container init 418fec6c3c822e42f2b677d6261cdcabefff1e2fc406ca64b27b9534b4a8fcfb (image=quay.io/ceph/ceph:v20, name=silly_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Jan 31 04:57:35 np0005603787 systemd[1]: libpod-conmon-6e62b67fd005bf7ece494dfb34fd53122ca10557a891fd15eeecc25afd5d6acf.scope: Deactivated successfully.
Jan 31 04:57:35 np0005603787 podman[75786]: 2026-01-31 09:57:35.199066299 +0000 UTC m=+0.023191462 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:35 np0005603787 podman[75786]: 2026-01-31 09:57:35.296023163 +0000 UTC m=+0.120148306 container start 418fec6c3c822e42f2b677d6261cdcabefff1e2fc406ca64b27b9534b4a8fcfb (image=quay.io/ceph/ceph:v20, name=silly_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 04:57:35 np0005603787 podman[75786]: 2026-01-31 09:57:35.300673477 +0000 UTC m=+0.124798620 container attach 418fec6c3c822e42f2b677d6261cdcabefff1e2fc406ca64b27b9534b4a8fcfb (image=quay.io/ceph/ceph:v20, name=silly_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 04:57:35 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 31 04:57:35 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2529362342' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 04:57:35 np0005603787 silly_poitras[75800]: 
Jan 31 04:57:35 np0005603787 silly_poitras[75800]: [global]
Jan 31 04:57:35 np0005603787 silly_poitras[75800]: 	fsid = 962d77ae-dc67-5de8-89d8-3d1670c67b61
Jan 31 04:57:35 np0005603787 silly_poitras[75800]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 31 04:57:35 np0005603787 silly_poitras[75800]: 	osd_crush_chooseleaf_type = 0
Jan 31 04:57:35 np0005603787 systemd[1]: libpod-418fec6c3c822e42f2b677d6261cdcabefff1e2fc406ca64b27b9534b4a8fcfb.scope: Deactivated successfully.
Jan 31 04:57:35 np0005603787 podman[75786]: 2026-01-31 09:57:35.683393157 +0000 UTC m=+0.507518300 container died 418fec6c3c822e42f2b677d6261cdcabefff1e2fc406ca64b27b9534b4a8fcfb (image=quay.io/ceph/ceph:v20, name=silly_poitras, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:57:35 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2529362342' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 04:57:35 np0005603787 systemd[1]: var-lib-containers-storage-overlay-dcbfac81de9a277459385d8e1344a84941f782d431558cca75d1480435e29dec-merged.mount: Deactivated successfully.
Jan 31 04:57:35 np0005603787 podman[75786]: 2026-01-31 09:57:35.727425925 +0000 UTC m=+0.551551068 container remove 418fec6c3c822e42f2b677d6261cdcabefff1e2fc406ca64b27b9534b4a8fcfb (image=quay.io/ceph/ceph:v20, name=silly_poitras, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:57:35 np0005603787 systemd[1]: libpod-conmon-418fec6c3c822e42f2b677d6261cdcabefff1e2fc406ca64b27b9534b4a8fcfb.scope: Deactivated successfully.
Jan 31 04:57:35 np0005603787 podman[75839]: 2026-01-31 09:57:35.777422443 +0000 UTC m=+0.035854420 container create aac7f2d0282ee38f4d0eb4e7dc9e3f048fd6e56b48d484f403388c48f200450a (image=quay.io/ceph/ceph:v20, name=musing_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:57:35 np0005603787 systemd[1]: Started libpod-conmon-aac7f2d0282ee38f4d0eb4e7dc9e3f048fd6e56b48d484f403388c48f200450a.scope.
Jan 31 04:57:35 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a5310e8d9a95bf296d9bc6c302881ac5ed5f8c5828b73c7b9303669f475d14a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a5310e8d9a95bf296d9bc6c302881ac5ed5f8c5828b73c7b9303669f475d14a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a5310e8d9a95bf296d9bc6c302881ac5ed5f8c5828b73c7b9303669f475d14a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:35 np0005603787 podman[75839]: 2026-01-31 09:57:35.834825449 +0000 UTC m=+0.093257456 container init aac7f2d0282ee38f4d0eb4e7dc9e3f048fd6e56b48d484f403388c48f200450a (image=quay.io/ceph/ceph:v20, name=musing_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 04:57:35 np0005603787 podman[75839]: 2026-01-31 09:57:35.839023422 +0000 UTC m=+0.097455399 container start aac7f2d0282ee38f4d0eb4e7dc9e3f048fd6e56b48d484f403388c48f200450a (image=quay.io/ceph/ceph:v20, name=musing_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:57:35 np0005603787 podman[75839]: 2026-01-31 09:57:35.84344361 +0000 UTC m=+0.101875607 container attach aac7f2d0282ee38f4d0eb4e7dc9e3f048fd6e56b48d484f403388c48f200450a (image=quay.io/ceph/ceph:v20, name=musing_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:57:35 np0005603787 podman[75839]: 2026-01-31 09:57:35.760458479 +0000 UTC m=+0.018890476 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 31 04:57:36 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2603720397' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 31 04:57:36 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2603720397' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 31 04:57:36 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2603720397' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 31 04:57:36 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.mdmqaq(active, since 7s)
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: mgr respawn  1: '-n'
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: mgr respawn  2: 'mgr.compute-0.mdmqaq'
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: mgr respawn  3: '-f'
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: mgr respawn  4: '--setuser'
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: mgr respawn  5: 'ceph'
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: mgr respawn  6: '--setgroup'
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: mgr respawn  7: 'ceph'
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: mgr respawn  8: '--default-log-to-file=false'
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: mgr respawn  9: '--default-log-to-journald=true'
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: mgr respawn  exe_path /proc/self/exe
Jan 31 04:57:36 np0005603787 systemd[1]: libpod-aac7f2d0282ee38f4d0eb4e7dc9e3f048fd6e56b48d484f403388c48f200450a.scope: Deactivated successfully.
Jan 31 04:57:36 np0005603787 podman[75839]: 2026-01-31 09:57:36.741021185 +0000 UTC m=+0.999453172 container died aac7f2d0282ee38f4d0eb4e7dc9e3f048fd6e56b48d484f403388c48f200450a (image=quay.io/ceph/ceph:v20, name=musing_knuth, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:57:36 np0005603787 systemd[1]: var-lib-containers-storage-overlay-6a5310e8d9a95bf296d9bc6c302881ac5ed5f8c5828b73c7b9303669f475d14a-merged.mount: Deactivated successfully.
Jan 31 04:57:36 np0005603787 podman[75839]: 2026-01-31 09:57:36.78233162 +0000 UTC m=+1.040763597 container remove aac7f2d0282ee38f4d0eb4e7dc9e3f048fd6e56b48d484f403388c48f200450a (image=quay.io/ceph/ceph:v20, name=musing_knuth, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:57:36 np0005603787 systemd[1]: libpod-conmon-aac7f2d0282ee38f4d0eb4e7dc9e3f048fd6e56b48d484f403388c48f200450a.scope: Deactivated successfully.
Jan 31 04:57:36 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-mdmqaq[75449]: ignoring --setuser ceph since I am not root
Jan 31 04:57:36 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-mdmqaq[75449]: ignoring --setgroup ceph since I am not root
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: pidfile_write: ignore empty --pid-file
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'alerts'
Jan 31 04:57:36 np0005603787 podman[75892]: 2026-01-31 09:57:36.849070256 +0000 UTC m=+0.046451273 container create 9eb6b3c17dcd0fedf0b47999caa5c12ca55b3b72ea52a7ae1555157d3c7a21d9 (image=quay.io/ceph/ceph:v20, name=modest_mclaren, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:57:36 np0005603787 systemd[1]: Started libpod-conmon-9eb6b3c17dcd0fedf0b47999caa5c12ca55b3b72ea52a7ae1555157d3c7a21d9.scope.
Jan 31 04:57:36 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:36 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d54c1dc9892bb73f7abf2ad1ac45f4eba18ed40ccaabedcc58f2a07cc017dc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:36 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d54c1dc9892bb73f7abf2ad1ac45f4eba18ed40ccaabedcc58f2a07cc017dc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:36 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d54c1dc9892bb73f7abf2ad1ac45f4eba18ed40ccaabedcc58f2a07cc017dc7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:36 np0005603787 podman[75892]: 2026-01-31 09:57:36.920100517 +0000 UTC m=+0.117481564 container init 9eb6b3c17dcd0fedf0b47999caa5c12ca55b3b72ea52a7ae1555157d3c7a21d9 (image=quay.io/ceph/ceph:v20, name=modest_mclaren, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 04:57:36 np0005603787 podman[75892]: 2026-01-31 09:57:36.825741012 +0000 UTC m=+0.023122069 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:36 np0005603787 podman[75892]: 2026-01-31 09:57:36.923780015 +0000 UTC m=+0.121161022 container start 9eb6b3c17dcd0fedf0b47999caa5c12ca55b3b72ea52a7ae1555157d3c7a21d9 (image=quay.io/ceph/ceph:v20, name=modest_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 04:57:36 np0005603787 podman[75892]: 2026-01-31 09:57:36.927351651 +0000 UTC m=+0.124732688 container attach 9eb6b3c17dcd0fedf0b47999caa5c12ca55b3b72ea52a7ae1555157d3c7a21d9 (image=quay.io/ceph/ceph:v20, name=modest_mclaren, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:36 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'balancer'
Jan 31 04:57:37 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'cephadm'
Jan 31 04:57:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 31 04:57:37 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3986393391' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 31 04:57:37 np0005603787 modest_mclaren[75928]: {
Jan 31 04:57:37 np0005603787 modest_mclaren[75928]:    "epoch": 5,
Jan 31 04:57:37 np0005603787 modest_mclaren[75928]:    "available": true,
Jan 31 04:57:37 np0005603787 modest_mclaren[75928]:    "active_name": "compute-0.mdmqaq",
Jan 31 04:57:37 np0005603787 modest_mclaren[75928]:    "num_standby": 0
Jan 31 04:57:37 np0005603787 modest_mclaren[75928]: }
Jan 31 04:57:37 np0005603787 systemd[1]: libpod-9eb6b3c17dcd0fedf0b47999caa5c12ca55b3b72ea52a7ae1555157d3c7a21d9.scope: Deactivated successfully.
Jan 31 04:57:37 np0005603787 podman[75892]: 2026-01-31 09:57:37.392240389 +0000 UTC m=+0.589621406 container died 9eb6b3c17dcd0fedf0b47999caa5c12ca55b3b72ea52a7ae1555157d3c7a21d9 (image=quay.io/ceph/ceph:v20, name=modest_mclaren, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:57:37 np0005603787 systemd[1]: var-lib-containers-storage-overlay-5d54c1dc9892bb73f7abf2ad1ac45f4eba18ed40ccaabedcc58f2a07cc017dc7-merged.mount: Deactivated successfully.
Jan 31 04:57:37 np0005603787 podman[75892]: 2026-01-31 09:57:37.432319402 +0000 UTC m=+0.629700419 container remove 9eb6b3c17dcd0fedf0b47999caa5c12ca55b3b72ea52a7ae1555157d3c7a21d9 (image=quay.io/ceph/ceph:v20, name=modest_mclaren, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:57:37 np0005603787 systemd[1]: libpod-conmon-9eb6b3c17dcd0fedf0b47999caa5c12ca55b3b72ea52a7ae1555157d3c7a21d9.scope: Deactivated successfully.
Jan 31 04:57:37 np0005603787 podman[75976]: 2026-01-31 09:57:37.484122898 +0000 UTC m=+0.036766965 container create 869ea3e346dac21c9bba93ee5f6402b436c2de9bb8a3f8743a59c0c21bd49f4e (image=quay.io/ceph/ceph:v20, name=frosty_goldwasser, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 04:57:37 np0005603787 systemd[1]: Started libpod-conmon-869ea3e346dac21c9bba93ee5f6402b436c2de9bb8a3f8743a59c0c21bd49f4e.scope.
Jan 31 04:57:37 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f57621c8cde7dbe57fda68a038ab7fd5cb9b641bc2c5080cbfe1fa88b067194/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f57621c8cde7dbe57fda68a038ab7fd5cb9b641bc2c5080cbfe1fa88b067194/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f57621c8cde7dbe57fda68a038ab7fd5cb9b641bc2c5080cbfe1fa88b067194/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:37 np0005603787 podman[75976]: 2026-01-31 09:57:37.545661884 +0000 UTC m=+0.098305971 container init 869ea3e346dac21c9bba93ee5f6402b436c2de9bb8a3f8743a59c0c21bd49f4e (image=quay.io/ceph/ceph:v20, name=frosty_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:37 np0005603787 podman[75976]: 2026-01-31 09:57:37.549417034 +0000 UTC m=+0.102061101 container start 869ea3e346dac21c9bba93ee5f6402b436c2de9bb8a3f8743a59c0c21bd49f4e (image=quay.io/ceph/ceph:v20, name=frosty_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 04:57:37 np0005603787 podman[75976]: 2026-01-31 09:57:37.553516524 +0000 UTC m=+0.106160591 container attach 869ea3e346dac21c9bba93ee5f6402b436c2de9bb8a3f8743a59c0c21bd49f4e (image=quay.io/ceph/ceph:v20, name=frosty_goldwasser, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 04:57:37 np0005603787 podman[75976]: 2026-01-31 09:57:37.467609646 +0000 UTC m=+0.020253733 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:37 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'crash'
Jan 31 04:57:37 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2603720397' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 31 04:57:37 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'dashboard'
Jan 31 04:57:38 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'devicehealth'
Jan 31 04:57:38 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 04:57:38 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-mdmqaq[75449]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 04:57:38 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-mdmqaq[75449]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 04:57:38 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-mdmqaq[75449]:  from numpy import show_config as show_numpy_config
Jan 31 04:57:38 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'influx'
Jan 31 04:57:38 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'insights'
Jan 31 04:57:38 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'iostat'
Jan 31 04:57:38 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'k8sevents'
Jan 31 04:57:39 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'localpool'
Jan 31 04:57:39 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 04:57:39 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'mirroring'
Jan 31 04:57:39 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'nfs'
Jan 31 04:57:39 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'orchestrator'
Jan 31 04:57:40 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 04:57:40 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'osd_support'
Jan 31 04:57:40 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 04:57:40 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'progress'
Jan 31 04:57:40 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'prometheus'
Jan 31 04:57:40 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'rbd_support'
Jan 31 04:57:40 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'rgw'
Jan 31 04:57:41 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'rook'
Jan 31 04:57:41 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'selftest'
Jan 31 04:57:41 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'smb'
Jan 31 04:57:42 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'snap_schedule'
Jan 31 04:57:42 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'stats'
Jan 31 04:57:42 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'status'
Jan 31 04:57:42 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'telegraf'
Jan 31 04:57:42 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'telemetry'
Jan 31 04:57:42 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 04:57:42 np0005603787 ceph-mgr[75453]: mgr[py] Loading python module 'volumes'
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: log_channel(cluster) log [INF] : Active manager daemon compute-0.mdmqaq restarted
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mdmqaq
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: ms_deliver_dispatch: unhandled message 0x55891a400000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: mgr handle_mgr_map Activating!
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: mgr handle_mgr_map I am now activating
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.mdmqaq(active, starting, since 0.0145405s)
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mdmqaq", "id": "compute-0.mdmqaq"} v 0)
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "mgr metadata", "who": "compute-0.mdmqaq", "id": "compute-0.mdmqaq"} : dispatch
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "mds metadata"} : dispatch
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).mds e1 all = 1
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "mon metadata"} : dispatch
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: balancer
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Starting
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: log_channel(cluster) log [INF] : Manager daemon compute-0.mdmqaq is now available
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_09:57:43
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] No pools available
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: Active manager daemon compute-0.mdmqaq restarted
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: Activating manager daemon compute-0.mdmqaq
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: Manager daemon compute-0.mdmqaq is now available
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: cephadm
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: crash
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: devicehealth
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: iostat
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [devicehealth INFO root] Starting
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: nfs
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: orchestrator
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: pg_autoscaler
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: progress
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [progress INFO root] Loading...
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [progress INFO root] No stored events to load
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [progress INFO root] Loaded [] historic events
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [progress INFO root] Loaded OSDMap, ready.
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] recovery thread starting
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] starting setup
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: rbd_support
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: status
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: telemetry
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mdmqaq/mirror_snapshot_schedule"} v 0)
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mdmqaq/mirror_snapshot_schedule"} : dispatch
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] PerfHandler: starting
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TaskHandler: starting
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mdmqaq/trash_purge_schedule"} v 0)
Jan 31 04:57:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mdmqaq/trash_purge_schedule"} : dispatch
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] setup complete
Jan 31 04:57:43 np0005603787 ceph-mgr[75453]: mgr load Constructed class from module: volumes
Jan 31 04:57:44 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 31 04:57:44 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.mdmqaq(active, since 1.02473s)
Jan 31 04:57:44 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 31 04:57:44 np0005603787 frosty_goldwasser[75992]: {
Jan 31 04:57:44 np0005603787 frosty_goldwasser[75992]:    "mgrmap_epoch": 7,
Jan 31 04:57:44 np0005603787 frosty_goldwasser[75992]:    "initialized": true
Jan 31 04:57:44 np0005603787 frosty_goldwasser[75992]: }
Jan 31 04:57:44 np0005603787 systemd[1]: libpod-869ea3e346dac21c9bba93ee5f6402b436c2de9bb8a3f8743a59c0c21bd49f4e.scope: Deactivated successfully.
Jan 31 04:57:44 np0005603787 podman[75976]: 2026-01-31 09:57:44.068349776 +0000 UTC m=+6.620993853 container died 869ea3e346dac21c9bba93ee5f6402b436c2de9bb8a3f8743a59c0c21bd49f4e (image=quay.io/ceph/ceph:v20, name=frosty_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:57:44 np0005603787 systemd[1]: var-lib-containers-storage-overlay-2f57621c8cde7dbe57fda68a038ab7fd5cb9b641bc2c5080cbfe1fa88b067194-merged.mount: Deactivated successfully.
Jan 31 04:57:44 np0005603787 podman[75976]: 2026-01-31 09:57:44.10887754 +0000 UTC m=+6.661521617 container remove 869ea3e346dac21c9bba93ee5f6402b436c2de9bb8a3f8743a59c0c21bd49f4e (image=quay.io/ceph/ceph:v20, name=frosty_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 04:57:44 np0005603787 systemd[1]: libpod-conmon-869ea3e346dac21c9bba93ee5f6402b436c2de9bb8a3f8743a59c0c21bd49f4e.scope: Deactivated successfully.
Jan 31 04:57:44 np0005603787 podman[76140]: 2026-01-31 09:57:44.161614432 +0000 UTC m=+0.038728058 container create 1e8097e30fda3dfed4824e41f871e029db832fca29d33547c4483e35f634da3f (image=quay.io/ceph/ceph:v20, name=modest_tu, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 04:57:44 np0005603787 systemd[1]: Started libpod-conmon-1e8097e30fda3dfed4824e41f871e029db832fca29d33547c4483e35f634da3f.scope.
Jan 31 04:57:44 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:44 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0741e7ae04efc178650dff2f1f08e2d7b80c16a0c36291b962fa8acc069d71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:44 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0741e7ae04efc178650dff2f1f08e2d7b80c16a0c36291b962fa8acc069d71/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:44 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0741e7ae04efc178650dff2f1f08e2d7b80c16a0c36291b962fa8acc069d71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:44 np0005603787 podman[76140]: 2026-01-31 09:57:44.237171623 +0000 UTC m=+0.114285259 container init 1e8097e30fda3dfed4824e41f871e029db832fca29d33547c4483e35f634da3f (image=quay.io/ceph/ceph:v20, name=modest_tu, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:44 np0005603787 podman[76140]: 2026-01-31 09:57:44.141720398 +0000 UTC m=+0.018834054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:44 np0005603787 podman[76140]: 2026-01-31 09:57:44.242440303 +0000 UTC m=+0.119553959 container start 1e8097e30fda3dfed4824e41f871e029db832fca29d33547c4483e35f634da3f (image=quay.io/ceph/ceph:v20, name=modest_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:57:44 np0005603787 podman[76140]: 2026-01-31 09:57:44.246259946 +0000 UTC m=+0.123373602 container attach 1e8097e30fda3dfed4824e41f871e029db832fca29d33547c4483e35f634da3f (image=quay.io/ceph/ceph:v20, name=modest_tu, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 04:57:44 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Jan 31 04:57:44 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1205321549' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 31 04:57:44 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:44 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:44 np0005603787 ceph-mon[75160]: Found migration_current of "None". Setting to last migration.
Jan 31 04:57:44 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:44 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:44 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mdmqaq/mirror_snapshot_schedule"} : dispatch
Jan 31 04:57:44 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mdmqaq/trash_purge_schedule"} : dispatch
Jan 31 04:57:44 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/1205321549' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 31 04:57:44 np0005603787 ceph-mgr[75453]: [cephadm INFO cherrypy.error] [31/Jan/2026:09:57:44] ENGINE Bus STARTING
Jan 31 04:57:44 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : [31/Jan/2026:09:57:44] ENGINE Bus STARTING
Jan 31 04:57:44 np0005603787 ceph-mgr[75453]: [cephadm INFO cherrypy.error] [31/Jan/2026:09:57:44] ENGINE Serving on https://192.168.122.100:7150
Jan 31 04:57:44 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : [31/Jan/2026:09:57:44] ENGINE Serving on https://192.168.122.100:7150
Jan 31 04:57:44 np0005603787 ceph-mgr[75453]: [cephadm INFO cherrypy.error] [31/Jan/2026:09:57:44] ENGINE Client ('192.168.122.100', 37184) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 04:57:44 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : [31/Jan/2026:09:57:44] ENGINE Client ('192.168.122.100', 37184) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 04:57:45 np0005603787 ceph-mgr[75453]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 04:57:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1205321549' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 31 04:57:45 np0005603787 modest_tu[76157]: module 'orchestrator' is already enabled (always-on)
Jan 31 04:57:45 np0005603787 ceph-mgr[75453]: [cephadm INFO cherrypy.error] [31/Jan/2026:09:57:45] ENGINE Serving on http://192.168.122.100:8765
Jan 31 04:57:45 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : [31/Jan/2026:09:57:45] ENGINE Serving on http://192.168.122.100:8765
Jan 31 04:57:45 np0005603787 ceph-mgr[75453]: [cephadm INFO cherrypy.error] [31/Jan/2026:09:57:45] ENGINE Bus STARTED
Jan 31 04:57:45 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.mdmqaq(active, since 2s)
Jan 31 04:57:45 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : [31/Jan/2026:09:57:45] ENGINE Bus STARTED
Jan 31 04:57:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 04:57:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 04:57:45 np0005603787 systemd[1]: libpod-1e8097e30fda3dfed4824e41f871e029db832fca29d33547c4483e35f634da3f.scope: Deactivated successfully.
Jan 31 04:57:45 np0005603787 podman[76140]: 2026-01-31 09:57:45.064119379 +0000 UTC m=+0.941233005 container died 1e8097e30fda3dfed4824e41f871e029db832fca29d33547c4483e35f634da3f (image=quay.io/ceph/ceph:v20, name=modest_tu, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 04:57:45 np0005603787 systemd[1]: var-lib-containers-storage-overlay-cb0741e7ae04efc178650dff2f1f08e2d7b80c16a0c36291b962fa8acc069d71-merged.mount: Deactivated successfully.
Jan 31 04:57:45 np0005603787 podman[76140]: 2026-01-31 09:57:45.100989806 +0000 UTC m=+0.978103432 container remove 1e8097e30fda3dfed4824e41f871e029db832fca29d33547c4483e35f634da3f (image=quay.io/ceph/ceph:v20, name=modest_tu, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 04:57:45 np0005603787 systemd[1]: libpod-conmon-1e8097e30fda3dfed4824e41f871e029db832fca29d33547c4483e35f634da3f.scope: Deactivated successfully.
Jan 31 04:57:45 np0005603787 podman[76218]: 2026-01-31 09:57:45.17440893 +0000 UTC m=+0.056066121 container create 7dece93b295ec31328f6606478397ced2bc1cf4df7a345c33460f06d2c407a49 (image=quay.io/ceph/ceph:v20, name=objective_ramanujan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 04:57:45 np0005603787 systemd[1]: Started libpod-conmon-7dece93b295ec31328f6606478397ced2bc1cf4df7a345c33460f06d2c407a49.scope.
Jan 31 04:57:45 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:45 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f63f1151efa4bf4e1eac554a97a129168ffc4a0f44365a0163931f4d9fbe9a1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:45 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f63f1151efa4bf4e1eac554a97a129168ffc4a0f44365a0163931f4d9fbe9a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:45 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f63f1151efa4bf4e1eac554a97a129168ffc4a0f44365a0163931f4d9fbe9a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:45 np0005603787 podman[76218]: 2026-01-31 09:57:45.14751053 +0000 UTC m=+0.029167741 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:45 np0005603787 podman[76218]: 2026-01-31 09:57:45.244932116 +0000 UTC m=+0.126589337 container init 7dece93b295ec31328f6606478397ced2bc1cf4df7a345c33460f06d2c407a49 (image=quay.io/ceph/ceph:v20, name=objective_ramanujan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 04:57:45 np0005603787 podman[76218]: 2026-01-31 09:57:45.248740258 +0000 UTC m=+0.130397439 container start 7dece93b295ec31328f6606478397ced2bc1cf4df7a345c33460f06d2c407a49 (image=quay.io/ceph/ceph:v20, name=objective_ramanujan, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:57:45 np0005603787 podman[76218]: 2026-01-31 09:57:45.25290861 +0000 UTC m=+0.134565841 container attach 7dece93b295ec31328f6606478397ced2bc1cf4df7a345c33460f06d2c407a49 (image=quay.io/ceph/ceph:v20, name=objective_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 04:57:45 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:57:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 31 04:57:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 04:57:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 04:57:45 np0005603787 systemd[1]: libpod-7dece93b295ec31328f6606478397ced2bc1cf4df7a345c33460f06d2c407a49.scope: Deactivated successfully.
Jan 31 04:57:45 np0005603787 podman[76218]: 2026-01-31 09:57:45.698644426 +0000 UTC m=+0.580301617 container died 7dece93b295ec31328f6606478397ced2bc1cf4df7a345c33460f06d2c407a49 (image=quay.io/ceph/ceph:v20, name=objective_ramanujan, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 04:57:45 np0005603787 systemd[1]: var-lib-containers-storage-overlay-3f63f1151efa4bf4e1eac554a97a129168ffc4a0f44365a0163931f4d9fbe9a1-merged.mount: Deactivated successfully.
Jan 31 04:57:45 np0005603787 podman[76218]: 2026-01-31 09:57:45.739279103 +0000 UTC m=+0.620936294 container remove 7dece93b295ec31328f6606478397ced2bc1cf4df7a345c33460f06d2c407a49 (image=quay.io/ceph/ceph:v20, name=objective_ramanujan, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 04:57:45 np0005603787 systemd[1]: libpod-conmon-7dece93b295ec31328f6606478397ced2bc1cf4df7a345c33460f06d2c407a49.scope: Deactivated successfully.
Jan 31 04:57:45 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/1205321549' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 31 04:57:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:45 np0005603787 podman[76272]: 2026-01-31 09:57:45.792506737 +0000 UTC m=+0.037100753 container create 48bc27c0cc8285470222688c56bc3d9b4a0471a22302f6a4f8fb147b1384861c (image=quay.io/ceph/ceph:v20, name=suspicious_hellman, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:57:45 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:57:45 np0005603787 systemd[1]: Started libpod-conmon-48bc27c0cc8285470222688c56bc3d9b4a0471a22302f6a4f8fb147b1384861c.scope.
Jan 31 04:57:45 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:45 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a1b2a0a3b8a4492ab968ce9165c80773cdcc8f43c82496c4d595d645805efd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:45 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a1b2a0a3b8a4492ab968ce9165c80773cdcc8f43c82496c4d595d645805efd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:45 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a1b2a0a3b8a4492ab968ce9165c80773cdcc8f43c82496c4d595d645805efd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:45 np0005603787 podman[76272]: 2026-01-31 09:57:45.862065698 +0000 UTC m=+0.106659734 container init 48bc27c0cc8285470222688c56bc3d9b4a0471a22302f6a4f8fb147b1384861c (image=quay.io/ceph/ceph:v20, name=suspicious_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Jan 31 04:57:45 np0005603787 podman[76272]: 2026-01-31 09:57:45.865642335 +0000 UTC m=+0.110236351 container start 48bc27c0cc8285470222688c56bc3d9b4a0471a22302f6a4f8fb147b1384861c (image=quay.io/ceph/ceph:v20, name=suspicious_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:45 np0005603787 podman[76272]: 2026-01-31 09:57:45.869055455 +0000 UTC m=+0.113649481 container attach 48bc27c0cc8285470222688c56bc3d9b4a0471a22302f6a4f8fb147b1384861c (image=quay.io/ceph/ceph:v20, name=suspicious_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:57:45 np0005603787 podman[76272]: 2026-01-31 09:57:45.776659223 +0000 UTC m=+0.021253329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019900159 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:57:46 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:57:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 31 04:57:46 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:46 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Set ssh ssh_user
Jan 31 04:57:46 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 31 04:57:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 31 04:57:46 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:46 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Set ssh ssh_config
Jan 31 04:57:46 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 31 04:57:46 np0005603787 ceph-mgr[75453]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 31 04:57:46 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 31 04:57:46 np0005603787 suspicious_hellman[76288]: ssh user set to ceph-admin. sudo will be used
Jan 31 04:57:46 np0005603787 systemd[1]: libpod-48bc27c0cc8285470222688c56bc3d9b4a0471a22302f6a4f8fb147b1384861c.scope: Deactivated successfully.
Jan 31 04:57:46 np0005603787 conmon[76288]: conmon 48bc27c0cc8285470222 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-48bc27c0cc8285470222688c56bc3d9b4a0471a22302f6a4f8fb147b1384861c.scope/container/memory.events
Jan 31 04:57:46 np0005603787 podman[76272]: 2026-01-31 09:57:46.355304295 +0000 UTC m=+0.599898321 container died 48bc27c0cc8285470222688c56bc3d9b4a0471a22302f6a4f8fb147b1384861c (image=quay.io/ceph/ceph:v20, name=suspicious_hellman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:57:46 np0005603787 systemd[1]: var-lib-containers-storage-overlay-23a1b2a0a3b8a4492ab968ce9165c80773cdcc8f43c82496c4d595d645805efd-merged.mount: Deactivated successfully.
Jan 31 04:57:46 np0005603787 podman[76272]: 2026-01-31 09:57:46.389197803 +0000 UTC m=+0.633791819 container remove 48bc27c0cc8285470222688c56bc3d9b4a0471a22302f6a4f8fb147b1384861c (image=quay.io/ceph/ceph:v20, name=suspicious_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 31 04:57:46 np0005603787 systemd[1]: libpod-conmon-48bc27c0cc8285470222688c56bc3d9b4a0471a22302f6a4f8fb147b1384861c.scope: Deactivated successfully.
Jan 31 04:57:46 np0005603787 podman[76326]: 2026-01-31 09:57:46.442672674 +0000 UTC m=+0.041111912 container create c45170d739e8124418d7bff53ccb2030c5891d1a0dedfa79d6fcb79d28d1ce3e (image=quay.io/ceph/ceph:v20, name=serene_kapitsa, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 04:57:46 np0005603787 systemd[1]: Started libpod-conmon-c45170d739e8124418d7bff53ccb2030c5891d1a0dedfa79d6fcb79d28d1ce3e.scope.
Jan 31 04:57:46 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f0ef49d11526c1f0a70c4df7fa23ab768b7a5f68fafa3f1d7f328de2f739f8/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f0ef49d11526c1f0a70c4df7fa23ab768b7a5f68fafa3f1d7f328de2f739f8/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f0ef49d11526c1f0a70c4df7fa23ab768b7a5f68fafa3f1d7f328de2f739f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f0ef49d11526c1f0a70c4df7fa23ab768b7a5f68fafa3f1d7f328de2f739f8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f0ef49d11526c1f0a70c4df7fa23ab768b7a5f68fafa3f1d7f328de2f739f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:46 np0005603787 podman[76326]: 2026-01-31 09:57:46.513699814 +0000 UTC m=+0.112139032 container init c45170d739e8124418d7bff53ccb2030c5891d1a0dedfa79d6fcb79d28d1ce3e (image=quay.io/ceph/ceph:v20, name=serene_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:57:46 np0005603787 podman[76326]: 2026-01-31 09:57:46.418860356 +0000 UTC m=+0.017299614 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:46 np0005603787 podman[76326]: 2026-01-31 09:57:46.519488009 +0000 UTC m=+0.117927217 container start c45170d739e8124418d7bff53ccb2030c5891d1a0dedfa79d6fcb79d28d1ce3e (image=quay.io/ceph/ceph:v20, name=serene_kapitsa, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Jan 31 04:57:46 np0005603787 podman[76326]: 2026-01-31 09:57:46.523774624 +0000 UTC m=+0.122213842 container attach c45170d739e8124418d7bff53ccb2030c5891d1a0dedfa79d6fcb79d28d1ce3e (image=quay.io/ceph/ceph:v20, name=serene_kapitsa, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:57:46 np0005603787 ceph-mon[75160]: [31/Jan/2026:09:57:44] ENGINE Bus STARTING
Jan 31 04:57:46 np0005603787 ceph-mon[75160]: [31/Jan/2026:09:57:44] ENGINE Serving on https://192.168.122.100:7150
Jan 31 04:57:46 np0005603787 ceph-mon[75160]: [31/Jan/2026:09:57:44] ENGINE Client ('192.168.122.100', 37184) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 04:57:46 np0005603787 ceph-mon[75160]: [31/Jan/2026:09:57:45] ENGINE Serving on http://192.168.122.100:8765
Jan 31 04:57:46 np0005603787 ceph-mon[75160]: [31/Jan/2026:09:57:45] ENGINE Bus STARTED
Jan 31 04:57:46 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:46 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:46 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:57:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 31 04:57:46 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:46 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 31 04:57:46 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 31 04:57:46 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Set ssh private key
Jan 31 04:57:46 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 31 04:57:46 np0005603787 systemd[1]: libpod-c45170d739e8124418d7bff53ccb2030c5891d1a0dedfa79d6fcb79d28d1ce3e.scope: Deactivated successfully.
Jan 31 04:57:46 np0005603787 podman[76326]: 2026-01-31 09:57:46.964798694 +0000 UTC m=+0.563237892 container died c45170d739e8124418d7bff53ccb2030c5891d1a0dedfa79d6fcb79d28d1ce3e (image=quay.io/ceph/ceph:v20, name=serene_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:57:46 np0005603787 systemd[1]: var-lib-containers-storage-overlay-d0f0ef49d11526c1f0a70c4df7fa23ab768b7a5f68fafa3f1d7f328de2f739f8-merged.mount: Deactivated successfully.
Jan 31 04:57:47 np0005603787 podman[76326]: 2026-01-31 09:57:47.014275757 +0000 UTC m=+0.612714955 container remove c45170d739e8124418d7bff53ccb2030c5891d1a0dedfa79d6fcb79d28d1ce3e (image=quay.io/ceph/ceph:v20, name=serene_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 04:57:47 np0005603787 systemd[1]: libpod-conmon-c45170d739e8124418d7bff53ccb2030c5891d1a0dedfa79d6fcb79d28d1ce3e.scope: Deactivated successfully.
Jan 31 04:57:47 np0005603787 ceph-mgr[75453]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 04:57:47 np0005603787 podman[76381]: 2026-01-31 09:57:47.071062567 +0000 UTC m=+0.036843697 container create 23dcd26122d6461c116021c1cac40549f5724033f1bc445c7b1800fe5b916966 (image=quay.io/ceph/ceph:v20, name=objective_chandrasekhar, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:57:47 np0005603787 systemd[1]: Started libpod-conmon-23dcd26122d6461c116021c1cac40549f5724033f1bc445c7b1800fe5b916966.scope.
Jan 31 04:57:47 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6d5b77a02fe08544fc96d3b50eb5c560988f0edecdeba1b83640f26a3552dc/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6d5b77a02fe08544fc96d3b50eb5c560988f0edecdeba1b83640f26a3552dc/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6d5b77a02fe08544fc96d3b50eb5c560988f0edecdeba1b83640f26a3552dc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6d5b77a02fe08544fc96d3b50eb5c560988f0edecdeba1b83640f26a3552dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6d5b77a02fe08544fc96d3b50eb5c560988f0edecdeba1b83640f26a3552dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:47 np0005603787 podman[76381]: 2026-01-31 09:57:47.137207126 +0000 UTC m=+0.102988286 container init 23dcd26122d6461c116021c1cac40549f5724033f1bc445c7b1800fe5b916966 (image=quay.io/ceph/ceph:v20, name=objective_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:47 np0005603787 podman[76381]: 2026-01-31 09:57:47.142727724 +0000 UTC m=+0.108508874 container start 23dcd26122d6461c116021c1cac40549f5724033f1bc445c7b1800fe5b916966 (image=quay.io/ceph/ceph:v20, name=objective_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:57:47 np0005603787 podman[76381]: 2026-01-31 09:57:47.145814807 +0000 UTC m=+0.111595967 container attach 23dcd26122d6461c116021c1cac40549f5724033f1bc445c7b1800fe5b916966 (image=quay.io/ceph/ceph:v20, name=objective_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 04:57:47 np0005603787 podman[76381]: 2026-01-31 09:57:47.055887291 +0000 UTC m=+0.021668421 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:47 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:57:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 31 04:57:47 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:47 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 31 04:57:47 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 31 04:57:47 np0005603787 systemd[1]: libpod-23dcd26122d6461c116021c1cac40549f5724033f1bc445c7b1800fe5b916966.scope: Deactivated successfully.
Jan 31 04:57:47 np0005603787 conmon[76397]: conmon 23dcd26122d6461c1160 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-23dcd26122d6461c116021c1cac40549f5724033f1bc445c7b1800fe5b916966.scope/container/memory.events
Jan 31 04:57:47 np0005603787 podman[76381]: 2026-01-31 09:57:47.565953598 +0000 UTC m=+0.531734758 container died 23dcd26122d6461c116021c1cac40549f5724033f1bc445c7b1800fe5b916966 (image=quay.io/ceph/ceph:v20, name=objective_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 04:57:47 np0005603787 systemd[1]: var-lib-containers-storage-overlay-6d6d5b77a02fe08544fc96d3b50eb5c560988f0edecdeba1b83640f26a3552dc-merged.mount: Deactivated successfully.
Jan 31 04:57:47 np0005603787 podman[76381]: 2026-01-31 09:57:47.60449733 +0000 UTC m=+0.570278450 container remove 23dcd26122d6461c116021c1cac40549f5724033f1bc445c7b1800fe5b916966 (image=quay.io/ceph/ceph:v20, name=objective_chandrasekhar, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:57:47 np0005603787 systemd[1]: libpod-conmon-23dcd26122d6461c116021c1cac40549f5724033f1bc445c7b1800fe5b916966.scope: Deactivated successfully.
Jan 31 04:57:47 np0005603787 podman[76435]: 2026-01-31 09:57:47.658897414 +0000 UTC m=+0.039103866 container create 2035518f82bf9a398af80db7d6b480a4f686f9d8cdbe8755d5e02102a3e51b29 (image=quay.io/ceph/ceph:v20, name=admiring_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 04:57:47 np0005603787 systemd[1]: Started libpod-conmon-2035518f82bf9a398af80db7d6b480a4f686f9d8cdbe8755d5e02102a3e51b29.scope.
Jan 31 04:57:47 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d83a2fb19d149531111ea5c98cb077d8c2c9d9214336e1370bd157269f4743/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d83a2fb19d149531111ea5c98cb077d8c2c9d9214336e1370bd157269f4743/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d83a2fb19d149531111ea5c98cb077d8c2c9d9214336e1370bd157269f4743/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:47 np0005603787 podman[76435]: 2026-01-31 09:57:47.728025044 +0000 UTC m=+0.108231506 container init 2035518f82bf9a398af80db7d6b480a4f686f9d8cdbe8755d5e02102a3e51b29 (image=quay.io/ceph/ceph:v20, name=admiring_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 04:57:47 np0005603787 podman[76435]: 2026-01-31 09:57:47.732557536 +0000 UTC m=+0.112763988 container start 2035518f82bf9a398af80db7d6b480a4f686f9d8cdbe8755d5e02102a3e51b29 (image=quay.io/ceph/ceph:v20, name=admiring_bell, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 04:57:47 np0005603787 podman[76435]: 2026-01-31 09:57:47.638707245 +0000 UTC m=+0.018913717 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:47 np0005603787 podman[76435]: 2026-01-31 09:57:47.735624067 +0000 UTC m=+0.115830539 container attach 2035518f82bf9a398af80db7d6b480a4f686f9d8cdbe8755d5e02102a3e51b29 (image=quay.io/ceph/ceph:v20, name=admiring_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:47 np0005603787 ceph-mon[75160]: Set ssh ssh_user
Jan 31 04:57:47 np0005603787 ceph-mon[75160]: Set ssh ssh_config
Jan 31 04:57:47 np0005603787 ceph-mon[75160]: ssh user set to ceph-admin. sudo will be used
Jan 31 04:57:47 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:47 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:47 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:57:48 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:57:48 np0005603787 admiring_bell[76452]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6WGJUU/m3tPsBg6TET4K4643dC2Dhi0ZmOBAr2aKgUsAnhcLez0HQHYWrggeMFRJablKYrpjic2oRstNpDgzWrUCWTV/xh7gFoto6nUpLCLdZmG+iDgIkpdodsBdaYJTyqriJTOG1k91G3bJ2mW/Stc4ANQzreWrWcn7GAxwXxepI4nKTYYURghsos2TZXxYZiJpTlIFtkl6PGFM/Jr6XrP79vfH2HCsnv8e9PTZskDuq7gudpnIZwaql0sZwnvlQVYfpySYm3TLF16gIytK26NgPgk0aGCtZdBdqasCDR7b+ZX2e+5/6DlRw3vw2c+iGi54a5EWFm7Zmdm4tQmo5i0n67AjBmrUU10AwiXN8PXNIECBV+K+zpCcA80ckV1Q3DawzN8b2053SZfTICeBS14JewArVk6jeEH/3LM5VSCgA+lVQnUPfwcipoGuXVCb1AIq7MKyhyO3QXhTixBZbrfGNlYQH7tH0gtSRP3ubR8un4F+5/BywxdmxYAiA1ac= zuul@controller
Jan 31 04:57:48 np0005603787 systemd[1]: libpod-2035518f82bf9a398af80db7d6b480a4f686f9d8cdbe8755d5e02102a3e51b29.scope: Deactivated successfully.
Jan 31 04:57:48 np0005603787 podman[76435]: 2026-01-31 09:57:48.224064066 +0000 UTC m=+0.604270508 container died 2035518f82bf9a398af80db7d6b480a4f686f9d8cdbe8755d5e02102a3e51b29 (image=quay.io/ceph/ceph:v20, name=admiring_bell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:57:48 np0005603787 systemd[1]: var-lib-containers-storage-overlay-07d83a2fb19d149531111ea5c98cb077d8c2c9d9214336e1370bd157269f4743-merged.mount: Deactivated successfully.
Jan 31 04:57:48 np0005603787 podman[76435]: 2026-01-31 09:57:48.277603688 +0000 UTC m=+0.657810140 container remove 2035518f82bf9a398af80db7d6b480a4f686f9d8cdbe8755d5e02102a3e51b29 (image=quay.io/ceph/ceph:v20, name=admiring_bell, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:57:48 np0005603787 systemd[1]: libpod-conmon-2035518f82bf9a398af80db7d6b480a4f686f9d8cdbe8755d5e02102a3e51b29.scope: Deactivated successfully.
Jan 31 04:57:48 np0005603787 podman[76491]: 2026-01-31 09:57:48.330818302 +0000 UTC m=+0.037675789 container create fe69cb4fb6a895016a5382511834d48944a635a79ab42283d5660ca71c6afc65 (image=quay.io/ceph/ceph:v20, name=naughty_sutherland, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:48 np0005603787 systemd[1]: Started libpod-conmon-fe69cb4fb6a895016a5382511834d48944a635a79ab42283d5660ca71c6afc65.scope.
Jan 31 04:57:48 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77df71694612501e57dbf5e0e6db81646b7da5dac2332ad6c42f0ac6fbcac76d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77df71694612501e57dbf5e0e6db81646b7da5dac2332ad6c42f0ac6fbcac76d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77df71694612501e57dbf5e0e6db81646b7da5dac2332ad6c42f0ac6fbcac76d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:48 np0005603787 podman[76491]: 2026-01-31 09:57:48.395918905 +0000 UTC m=+0.102776402 container init fe69cb4fb6a895016a5382511834d48944a635a79ab42283d5660ca71c6afc65 (image=quay.io/ceph/ceph:v20, name=naughty_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:57:48 np0005603787 podman[76491]: 2026-01-31 09:57:48.400459276 +0000 UTC m=+0.107316763 container start fe69cb4fb6a895016a5382511834d48944a635a79ab42283d5660ca71c6afc65 (image=quay.io/ceph/ceph:v20, name=naughty_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:57:48 np0005603787 podman[76491]: 2026-01-31 09:57:48.404481904 +0000 UTC m=+0.111339401 container attach fe69cb4fb6a895016a5382511834d48944a635a79ab42283d5660ca71c6afc65 (image=quay.io/ceph/ceph:v20, name=naughty_sutherland, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:57:48 np0005603787 podman[76491]: 2026-01-31 09:57:48.313355176 +0000 UTC m=+0.020212743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:48 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:57:48 np0005603787 ceph-mon[75160]: Set ssh ssh_identity_key
Jan 31 04:57:48 np0005603787 ceph-mon[75160]: Set ssh private key
Jan 31 04:57:48 np0005603787 ceph-mon[75160]: Set ssh ssh_identity_pub
Jan 31 04:57:48 np0005603787 systemd-logind[786]: New session 21 of user ceph-admin.
Jan 31 04:57:48 np0005603787 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 04:57:48 np0005603787 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 04:57:49 np0005603787 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 04:57:49 np0005603787 systemd[1]: Starting User Manager for UID 42477...
Jan 31 04:57:49 np0005603787 ceph-mgr[75453]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 04:57:49 np0005603787 systemd[76537]: Queued start job for default target Main User Target.
Jan 31 04:57:49 np0005603787 systemd[76537]: Created slice User Application Slice.
Jan 31 04:57:49 np0005603787 systemd[76537]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 04:57:49 np0005603787 systemd[76537]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 04:57:49 np0005603787 systemd[76537]: Reached target Paths.
Jan 31 04:57:49 np0005603787 systemd[76537]: Reached target Timers.
Jan 31 04:57:49 np0005603787 systemd[76537]: Starting D-Bus User Message Bus Socket...
Jan 31 04:57:49 np0005603787 systemd[76537]: Starting Create User's Volatile Files and Directories...
Jan 31 04:57:49 np0005603787 systemd[76537]: Finished Create User's Volatile Files and Directories.
Jan 31 04:57:49 np0005603787 systemd[76537]: Listening on D-Bus User Message Bus Socket.
Jan 31 04:57:49 np0005603787 systemd[76537]: Reached target Sockets.
Jan 31 04:57:49 np0005603787 systemd[76537]: Reached target Basic System.
Jan 31 04:57:49 np0005603787 systemd[76537]: Reached target Main User Target.
Jan 31 04:57:49 np0005603787 systemd[76537]: Startup finished in 108ms.
Jan 31 04:57:49 np0005603787 systemd[1]: Started User Manager for UID 42477.
Jan 31 04:57:49 np0005603787 systemd[1]: Started Session 21 of User ceph-admin.
Jan 31 04:57:49 np0005603787 systemd-logind[786]: New session 23 of user ceph-admin.
Jan 31 04:57:49 np0005603787 systemd[1]: Started Session 23 of User ceph-admin.
Jan 31 04:57:49 np0005603787 systemd-logind[786]: New session 24 of user ceph-admin.
Jan 31 04:57:49 np0005603787 systemd[1]: Started Session 24 of User ceph-admin.
Jan 31 04:57:49 np0005603787 systemd-logind[786]: New session 25 of user ceph-admin.
Jan 31 04:57:49 np0005603787 systemd[1]: Started Session 25 of User ceph-admin.
Jan 31 04:57:49 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:57:49 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 31 04:57:49 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 31 04:57:50 np0005603787 systemd-logind[786]: New session 26 of user ceph-admin.
Jan 31 04:57:50 np0005603787 systemd[1]: Started Session 26 of User ceph-admin.
Jan 31 04:57:50 np0005603787 systemd-logind[786]: New session 27 of user ceph-admin.
Jan 31 04:57:50 np0005603787 systemd[1]: Started Session 27 of User ceph-admin.
Jan 31 04:57:50 np0005603787 systemd-logind[786]: New session 28 of user ceph-admin.
Jan 31 04:57:50 np0005603787 systemd[1]: Started Session 28 of User ceph-admin.
Jan 31 04:57:50 np0005603787 ceph-mon[75160]: Deploying cephadm binary to compute-0
Jan 31 04:57:51 np0005603787 ceph-mgr[75453]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 04:57:51 np0005603787 systemd-logind[786]: New session 29 of user ceph-admin.
Jan 31 04:57:51 np0005603787 systemd[1]: Started Session 29 of User ceph-admin.
Jan 31 04:57:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052550 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:57:51 np0005603787 systemd-logind[786]: New session 30 of user ceph-admin.
Jan 31 04:57:51 np0005603787 systemd[1]: Started Session 30 of User ceph-admin.
Jan 31 04:57:51 np0005603787 systemd-logind[786]: New session 31 of user ceph-admin.
Jan 31 04:57:51 np0005603787 systemd[1]: Started Session 31 of User ceph-admin.
Jan 31 04:57:51 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:57:53 np0005603787 ceph-mgr[75453]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 04:57:53 np0005603787 systemd-logind[786]: New session 32 of user ceph-admin.
Jan 31 04:57:53 np0005603787 systemd[1]: Started Session 32 of User ceph-admin.
Jan 31 04:57:53 np0005603787 systemd-logind[786]: New session 33 of user ceph-admin.
Jan 31 04:57:53 np0005603787 systemd[1]: Started Session 33 of User ceph-admin.
Jan 31 04:57:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 04:57:53 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:57:53 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:53 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Added host compute-0
Jan 31 04:57:53 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 31 04:57:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 04:57:53 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 04:57:53 np0005603787 naughty_sutherland[76507]: Added host 'compute-0' with addr '192.168.122.100'
Jan 31 04:57:53 np0005603787 systemd[1]: libpod-fe69cb4fb6a895016a5382511834d48944a635a79ab42283d5660ca71c6afc65.scope: Deactivated successfully.
Jan 31 04:57:53 np0005603787 podman[76491]: 2026-01-31 09:57:53.827970065 +0000 UTC m=+5.534827562 container died fe69cb4fb6a895016a5382511834d48944a635a79ab42283d5660ca71c6afc65 (image=quay.io/ceph/ceph:v20, name=naughty_sutherland, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 04:57:53 np0005603787 systemd[1]: var-lib-containers-storage-overlay-77df71694612501e57dbf5e0e6db81646b7da5dac2332ad6c42f0ac6fbcac76d-merged.mount: Deactivated successfully.
Jan 31 04:57:53 np0005603787 podman[76491]: 2026-01-31 09:57:53.875961279 +0000 UTC m=+5.582818766 container remove fe69cb4fb6a895016a5382511834d48944a635a79ab42283d5660ca71c6afc65 (image=quay.io/ceph/ceph:v20, name=naughty_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 04:57:53 np0005603787 systemd[1]: libpod-conmon-fe69cb4fb6a895016a5382511834d48944a635a79ab42283d5660ca71c6afc65.scope: Deactivated successfully.
Jan 31 04:57:53 np0005603787 podman[76939]: 2026-01-31 09:57:53.928637038 +0000 UTC m=+0.037663568 container create 6cb3d76fa9b58b2c5ac697400dc6f6b9e54a09bc1797c1a01e0209d14c0c9962 (image=quay.io/ceph/ceph:v20, name=nifty_ride, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:57:53 np0005603787 systemd[1]: Started libpod-conmon-6cb3d76fa9b58b2c5ac697400dc6f6b9e54a09bc1797c1a01e0209d14c0c9962.scope.
Jan 31 04:57:53 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:54 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aeb6544eebe4d0dd9057f1bf9c2164bdc9d69fab550287771457bd9dc564064/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:54 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aeb6544eebe4d0dd9057f1bf9c2164bdc9d69fab550287771457bd9dc564064/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:54 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aeb6544eebe4d0dd9057f1bf9c2164bdc9d69fab550287771457bd9dc564064/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:54 np0005603787 podman[76939]: 2026-01-31 09:57:53.909687751 +0000 UTC m=+0.018714321 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:54 np0005603787 podman[76939]: 2026-01-31 09:57:54.014558168 +0000 UTC m=+0.123584698 container init 6cb3d76fa9b58b2c5ac697400dc6f6b9e54a09bc1797c1a01e0209d14c0c9962 (image=quay.io/ceph/ceph:v20, name=nifty_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 04:57:54 np0005603787 podman[76939]: 2026-01-31 09:57:54.0202545 +0000 UTC m=+0.129281020 container start 6cb3d76fa9b58b2c5ac697400dc6f6b9e54a09bc1797c1a01e0209d14c0c9962 (image=quay.io/ceph/ceph:v20, name=nifty_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 04:57:54 np0005603787 podman[76939]: 2026-01-31 09:57:54.024673928 +0000 UTC m=+0.133700478 container attach 6cb3d76fa9b58b2c5ac697400dc6f6b9e54a09bc1797c1a01e0209d14c0c9962 (image=quay.io/ceph/ceph:v20, name=nifty_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 04:57:54 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:57:54 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 31 04:57:54 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 31 04:57:54 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 04:57:54 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:54 np0005603787 nifty_ride[76969]: Scheduled mon update...
Jan 31 04:57:54 np0005603787 systemd[1]: libpod-6cb3d76fa9b58b2c5ac697400dc6f6b9e54a09bc1797c1a01e0209d14c0c9962.scope: Deactivated successfully.
Jan 31 04:57:54 np0005603787 podman[76939]: 2026-01-31 09:57:54.450538133 +0000 UTC m=+0.559564663 container died 6cb3d76fa9b58b2c5ac697400dc6f6b9e54a09bc1797c1a01e0209d14c0c9962 (image=quay.io/ceph/ceph:v20, name=nifty_ride, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 04:57:54 np0005603787 systemd[1]: var-lib-containers-storage-overlay-0aeb6544eebe4d0dd9057f1bf9c2164bdc9d69fab550287771457bd9dc564064-merged.mount: Deactivated successfully.
Jan 31 04:57:54 np0005603787 podman[76939]: 2026-01-31 09:57:54.493617555 +0000 UTC m=+0.602644095 container remove 6cb3d76fa9b58b2c5ac697400dc6f6b9e54a09bc1797c1a01e0209d14c0c9962 (image=quay.io/ceph/ceph:v20, name=nifty_ride, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:57:54 np0005603787 systemd[1]: libpod-conmon-6cb3d76fa9b58b2c5ac697400dc6f6b9e54a09bc1797c1a01e0209d14c0c9962.scope: Deactivated successfully.
Jan 31 04:57:54 np0005603787 podman[77032]: 2026-01-31 09:57:54.551677489 +0000 UTC m=+0.043768042 container create b4d8eb64dc25998726b6439fb4ced1d0bed25fdf67f520dca185bd33a127ea13 (image=quay.io/ceph/ceph:v20, name=boring_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:54 np0005603787 systemd[1]: Started libpod-conmon-b4d8eb64dc25998726b6439fb4ced1d0bed25fdf67f520dca185bd33a127ea13.scope.
Jan 31 04:57:54 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:54 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d2a361d5f56bed0bc36ecac72d83e8796d848aafad50731de67fc2dcb825ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:54 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d2a361d5f56bed0bc36ecac72d83e8796d848aafad50731de67fc2dcb825ef/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:54 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d2a361d5f56bed0bc36ecac72d83e8796d848aafad50731de67fc2dcb825ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:54 np0005603787 podman[77032]: 2026-01-31 09:57:54.612333872 +0000 UTC m=+0.104424425 container init b4d8eb64dc25998726b6439fb4ced1d0bed25fdf67f520dca185bd33a127ea13 (image=quay.io/ceph/ceph:v20, name=boring_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:57:54 np0005603787 podman[77032]: 2026-01-31 09:57:54.616249387 +0000 UTC m=+0.108339940 container start b4d8eb64dc25998726b6439fb4ced1d0bed25fdf67f520dca185bd33a127ea13 (image=quay.io/ceph/ceph:v20, name=boring_varahamihira, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 04:57:54 np0005603787 podman[77032]: 2026-01-31 09:57:54.619839793 +0000 UTC m=+0.111930366 container attach b4d8eb64dc25998726b6439fb4ced1d0bed25fdf67f520dca185bd33a127ea13 (image=quay.io/ceph/ceph:v20, name=boring_varahamihira, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:54 np0005603787 podman[77032]: 2026-01-31 09:57:54.530532713 +0000 UTC m=+0.022623316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:54 np0005603787 podman[76994]: 2026-01-31 09:57:54.702744971 +0000 UTC m=+0.571928694 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:54 np0005603787 podman[77086]: 2026-01-31 09:57:54.800019234 +0000 UTC m=+0.039312553 container create d08ab218780a359fd1d3e54886ad577ab0e3462ec8dbd655accde8ac950a9e25 (image=quay.io/ceph/ceph:v20, name=recursing_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:57:54 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:54 np0005603787 ceph-mon[75160]: Added host compute-0
Jan 31 04:57:54 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:54 np0005603787 systemd[1]: Started libpod-conmon-d08ab218780a359fd1d3e54886ad577ab0e3462ec8dbd655accde8ac950a9e25.scope.
Jan 31 04:57:54 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:54 np0005603787 podman[77086]: 2026-01-31 09:57:54.864319005 +0000 UTC m=+0.103612324 container init d08ab218780a359fd1d3e54886ad577ab0e3462ec8dbd655accde8ac950a9e25 (image=quay.io/ceph/ceph:v20, name=recursing_bell, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 04:57:54 np0005603787 podman[77086]: 2026-01-31 09:57:54.867548321 +0000 UTC m=+0.106841640 container start d08ab218780a359fd1d3e54886ad577ab0e3462ec8dbd655accde8ac950a9e25 (image=quay.io/ceph/ceph:v20, name=recursing_bell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:57:54 np0005603787 podman[77086]: 2026-01-31 09:57:54.780776869 +0000 UTC m=+0.020070248 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:54 np0005603787 podman[77086]: 2026-01-31 09:57:54.885843151 +0000 UTC m=+0.125136500 container attach d08ab218780a359fd1d3e54886ad577ab0e3462ec8dbd655accde8ac950a9e25 (image=quay.io/ceph/ceph:v20, name=recursing_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:57:54 np0005603787 recursing_bell[77103]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 31 04:57:54 np0005603787 systemd[1]: libpod-d08ab218780a359fd1d3e54886ad577ab0e3462ec8dbd655accde8ac950a9e25.scope: Deactivated successfully.
Jan 31 04:57:54 np0005603787 podman[77086]: 2026-01-31 09:57:54.946243756 +0000 UTC m=+0.185537075 container died d08ab218780a359fd1d3e54886ad577ab0e3462ec8dbd655accde8ac950a9e25 (image=quay.io/ceph/ceph:v20, name=recursing_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:54 np0005603787 systemd[1]: var-lib-containers-storage-overlay-a5330a9ac2d46aa50c925f7c3810630a65213765e008466e60f022d18c062184-merged.mount: Deactivated successfully.
Jan 31 04:57:54 np0005603787 podman[77086]: 2026-01-31 09:57:54.981924072 +0000 UTC m=+0.221217391 container remove d08ab218780a359fd1d3e54886ad577ab0e3462ec8dbd655accde8ac950a9e25 (image=quay.io/ceph/ceph:v20, name=recursing_bell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:57:55 np0005603787 systemd[1]: libpod-conmon-d08ab218780a359fd1d3e54886ad577ab0e3462ec8dbd655accde8ac950a9e25.scope: Deactivated successfully.
Jan 31 04:57:55 np0005603787 ceph-mgr[75453]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 04:57:55 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 31 04:57:55 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:55 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:57:55 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 31 04:57:55 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 31 04:57:55 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 04:57:55 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:55 np0005603787 boring_varahamihira[77049]: Scheduled mgr update...
Jan 31 04:57:55 np0005603787 systemd[1]: libpod-b4d8eb64dc25998726b6439fb4ced1d0bed25fdf67f520dca185bd33a127ea13.scope: Deactivated successfully.
Jan 31 04:57:55 np0005603787 podman[77032]: 2026-01-31 09:57:55.13623204 +0000 UTC m=+0.628322593 container died b4d8eb64dc25998726b6439fb4ced1d0bed25fdf67f520dca185bd33a127ea13 (image=quay.io/ceph/ceph:v20, name=boring_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 04:57:55 np0005603787 systemd[1]: var-lib-containers-storage-overlay-54d2a361d5f56bed0bc36ecac72d83e8796d848aafad50731de67fc2dcb825ef-merged.mount: Deactivated successfully.
Jan 31 04:57:55 np0005603787 podman[77032]: 2026-01-31 09:57:55.19341019 +0000 UTC m=+0.685500743 container remove b4d8eb64dc25998726b6439fb4ced1d0bed25fdf67f520dca185bd33a127ea13 (image=quay.io/ceph/ceph:v20, name=boring_varahamihira, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 04:57:55 np0005603787 systemd[1]: libpod-conmon-b4d8eb64dc25998726b6439fb4ced1d0bed25fdf67f520dca185bd33a127ea13.scope: Deactivated successfully.
Jan 31 04:57:55 np0005603787 podman[77184]: 2026-01-31 09:57:55.243264423 +0000 UTC m=+0.036438736 container create 600878b1780421ebf02409437b8f8b740b365984e6ab92ef09775bfe508cb029 (image=quay.io/ceph/ceph:v20, name=jolly_mirzakhani, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:55 np0005603787 systemd[1]: Started libpod-conmon-600878b1780421ebf02409437b8f8b740b365984e6ab92ef09775bfe508cb029.scope.
Jan 31 04:57:55 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:55 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1864bb062f0dea2226133cb91b92d3316cbe42622daf332d1a85c81f67dfeee7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:55 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1864bb062f0dea2226133cb91b92d3316cbe42622daf332d1a85c81f67dfeee7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:55 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1864bb062f0dea2226133cb91b92d3316cbe42622daf332d1a85c81f67dfeee7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:55 np0005603787 podman[77184]: 2026-01-31 09:57:55.317886621 +0000 UTC m=+0.111060934 container init 600878b1780421ebf02409437b8f8b740b365984e6ab92ef09775bfe508cb029 (image=quay.io/ceph/ceph:v20, name=jolly_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:57:55 np0005603787 podman[77184]: 2026-01-31 09:57:55.225119798 +0000 UTC m=+0.018294141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:55 np0005603787 podman[77184]: 2026-01-31 09:57:55.32313111 +0000 UTC m=+0.116305423 container start 600878b1780421ebf02409437b8f8b740b365984e6ab92ef09775bfe508cb029 (image=quay.io/ceph/ceph:v20, name=jolly_mirzakhani, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 04:57:55 np0005603787 podman[77184]: 2026-01-31 09:57:55.326871571 +0000 UTC m=+0.120045904 container attach 600878b1780421ebf02409437b8f8b740b365984e6ab92ef09775bfe508cb029 (image=quay.io/ceph/ceph:v20, name=jolly_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:57:55 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:57:55 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:55 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:57:55 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Saving service crash spec with placement *
Jan 31 04:57:55 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 31 04:57:55 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 31 04:57:55 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:55 np0005603787 jolly_mirzakhani[77200]: Scheduled crash update...
Jan 31 04:57:55 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:57:55 np0005603787 ceph-mon[75160]: Saving service mon spec with placement count:5
Jan 31 04:57:55 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:55 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:55 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:55 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:55 np0005603787 systemd[1]: libpod-600878b1780421ebf02409437b8f8b740b365984e6ab92ef09775bfe508cb029.scope: Deactivated successfully.
Jan 31 04:57:55 np0005603787 podman[77184]: 2026-01-31 09:57:55.817263351 +0000 UTC m=+0.610437664 container died 600878b1780421ebf02409437b8f8b740b365984e6ab92ef09775bfe508cb029 (image=quay.io/ceph/ceph:v20, name=jolly_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:57:55 np0005603787 systemd[1]: var-lib-containers-storage-overlay-1864bb062f0dea2226133cb91b92d3316cbe42622daf332d1a85c81f67dfeee7-merged.mount: Deactivated successfully.
Jan 31 04:57:55 np0005603787 podman[77184]: 2026-01-31 09:57:55.856542742 +0000 UTC m=+0.649717065 container remove 600878b1780421ebf02409437b8f8b740b365984e6ab92ef09775bfe508cb029 (image=quay.io/ceph/ceph:v20, name=jolly_mirzakhani, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:55 np0005603787 systemd[1]: libpod-conmon-600878b1780421ebf02409437b8f8b740b365984e6ab92ef09775bfe508cb029.scope: Deactivated successfully.
Jan 31 04:57:55 np0005603787 podman[77324]: 2026-01-31 09:57:55.918911041 +0000 UTC m=+0.044002718 container create 383afb5b9c04ea9e27de2e3c86f7b1abbf993ac422ed981830de3daf3e3aa8b6 (image=quay.io/ceph/ceph:v20, name=sharp_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:57:55 np0005603787 systemd[1]: Started libpod-conmon-383afb5b9c04ea9e27de2e3c86f7b1abbf993ac422ed981830de3daf3e3aa8b6.scope.
Jan 31 04:57:55 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:55 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d496ffcc237aef3b3b35c93304a0737e1aba194b51b9bb525183ad622f4e7ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:55 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d496ffcc237aef3b3b35c93304a0737e1aba194b51b9bb525183ad622f4e7ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:55 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d496ffcc237aef3b3b35c93304a0737e1aba194b51b9bb525183ad622f4e7ec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:55 np0005603787 podman[77324]: 2026-01-31 09:57:55.90092234 +0000 UTC m=+0.026014037 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:55 np0005603787 podman[77324]: 2026-01-31 09:57:55.99883823 +0000 UTC m=+0.123929907 container init 383afb5b9c04ea9e27de2e3c86f7b1abbf993ac422ed981830de3daf3e3aa8b6 (image=quay.io/ceph/ceph:v20, name=sharp_montalcini, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:57:56 np0005603787 podman[77324]: 2026-01-31 09:57:56.003952247 +0000 UTC m=+0.129043924 container start 383afb5b9c04ea9e27de2e3c86f7b1abbf993ac422ed981830de3daf3e3aa8b6 (image=quay.io/ceph/ceph:v20, name=sharp_montalcini, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:56 np0005603787 podman[77324]: 2026-01-31 09:57:56.007702107 +0000 UTC m=+0.132793804 container attach 383afb5b9c04ea9e27de2e3c86f7b1abbf993ac422ed981830de3daf3e3aa8b6 (image=quay.io/ceph/ceph:v20, name=sharp_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:57:56 np0005603787 podman[77372]: 2026-01-31 09:57:56.056923754 +0000 UTC m=+0.056439511 container exec 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 04:57:56 np0005603787 podman[77372]: 2026-01-31 09:57:56.143513231 +0000 UTC m=+0.143028998 container exec_died 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 04:57:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054701 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:57:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:57:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 31 04:57:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1394919520' entity='client.admin' 
Jan 31 04:57:56 np0005603787 systemd[1]: libpod-383afb5b9c04ea9e27de2e3c86f7b1abbf993ac422ed981830de3daf3e3aa8b6.scope: Deactivated successfully.
Jan 31 04:57:56 np0005603787 podman[77324]: 2026-01-31 09:57:56.410442573 +0000 UTC m=+0.535534250 container died 383afb5b9c04ea9e27de2e3c86f7b1abbf993ac422ed981830de3daf3e3aa8b6 (image=quay.io/ceph/ceph:v20, name=sharp_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:56 np0005603787 systemd[1]: var-lib-containers-storage-overlay-7d496ffcc237aef3b3b35c93304a0737e1aba194b51b9bb525183ad622f4e7ec-merged.mount: Deactivated successfully.
Jan 31 04:57:56 np0005603787 podman[77324]: 2026-01-31 09:57:56.535773516 +0000 UTC m=+0.660865203 container remove 383afb5b9c04ea9e27de2e3c86f7b1abbf993ac422ed981830de3daf3e3aa8b6 (image=quay.io/ceph/ceph:v20, name=sharp_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 31 04:57:56 np0005603787 systemd[1]: libpod-conmon-383afb5b9c04ea9e27de2e3c86f7b1abbf993ac422ed981830de3daf3e3aa8b6.scope: Deactivated successfully.
Jan 31 04:57:56 np0005603787 podman[77535]: 2026-01-31 09:57:56.596242825 +0000 UTC m=+0.042454698 container create 2f22cf8e2c80fef6b6c7055a5bead2c99f3fcb1de3f75e0872168517bfbbb5fd (image=quay.io/ceph/ceph:v20, name=nostalgic_tharp, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:57:56 np0005603787 systemd[1]: Started libpod-conmon-2f22cf8e2c80fef6b6c7055a5bead2c99f3fcb1de3f75e0872168517bfbbb5fd.scope.
Jan 31 04:57:56 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:56 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22c80767cff424198b4c7a7dcfb52a48cbe9193c440121ce8552435f6296c93/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:56 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22c80767cff424198b4c7a7dcfb52a48cbe9193c440121ce8552435f6296c93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:56 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22c80767cff424198b4c7a7dcfb52a48cbe9193c440121ce8552435f6296c93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:56 np0005603787 podman[77535]: 2026-01-31 09:57:56.577815441 +0000 UTC m=+0.024027324 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:56 np0005603787 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77566 (sysctl)
Jan 31 04:57:56 np0005603787 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 31 04:57:56 np0005603787 podman[77535]: 2026-01-31 09:57:56.678068804 +0000 UTC m=+0.124280697 container init 2f22cf8e2c80fef6b6c7055a5bead2c99f3fcb1de3f75e0872168517bfbbb5fd (image=quay.io/ceph/ceph:v20, name=nostalgic_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:57:56 np0005603787 podman[77535]: 2026-01-31 09:57:56.68465304 +0000 UTC m=+0.130864903 container start 2f22cf8e2c80fef6b6c7055a5bead2c99f3fcb1de3f75e0872168517bfbbb5fd (image=quay.io/ceph/ceph:v20, name=nostalgic_tharp, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:56 np0005603787 podman[77535]: 2026-01-31 09:57:56.688597086 +0000 UTC m=+0.134808969 container attach 2f22cf8e2c80fef6b6c7055a5bead2c99f3fcb1de3f75e0872168517bfbbb5fd (image=quay.io/ceph/ceph:v20, name=nostalgic_tharp, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 04:57:56 np0005603787 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 31 04:57:56 np0005603787 ceph-mon[75160]: Saving service mgr spec with placement count:2
Jan 31 04:57:56 np0005603787 ceph-mon[75160]: Saving service crash spec with placement *
Jan 31 04:57:56 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:56 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/1394919520' entity='client.admin' 
Jan 31 04:57:57 np0005603787 ceph-mgr[75453]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 04:57:57 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:57:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 31 04:57:57 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:57 np0005603787 systemd[1]: libpod-2f22cf8e2c80fef6b6c7055a5bead2c99f3fcb1de3f75e0872168517bfbbb5fd.scope: Deactivated successfully.
Jan 31 04:57:57 np0005603787 podman[77535]: 2026-01-31 09:57:57.138536184 +0000 UTC m=+0.584748047 container died 2f22cf8e2c80fef6b6c7055a5bead2c99f3fcb1de3f75e0872168517bfbbb5fd (image=quay.io/ceph/ceph:v20, name=nostalgic_tharp, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:57 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f22c80767cff424198b4c7a7dcfb52a48cbe9193c440121ce8552435f6296c93-merged.mount: Deactivated successfully.
Jan 31 04:57:57 np0005603787 podman[77535]: 2026-01-31 09:57:57.184777051 +0000 UTC m=+0.630988914 container remove 2f22cf8e2c80fef6b6c7055a5bead2c99f3fcb1de3f75e0872168517bfbbb5fd (image=quay.io/ceph/ceph:v20, name=nostalgic_tharp, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 04:57:57 np0005603787 systemd[1]: libpod-conmon-2f22cf8e2c80fef6b6c7055a5bead2c99f3fcb1de3f75e0872168517bfbbb5fd.scope: Deactivated successfully.
Jan 31 04:57:57 np0005603787 podman[77672]: 2026-01-31 09:57:57.244335565 +0000 UTC m=+0.042280973 container create 5c86ecf8144150e37ea0bc2dc3eadd8e1334b302fcab3d273316447df3eede3f (image=quay.io/ceph/ceph:v20, name=cranky_hertz, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:57:57 np0005603787 systemd[1]: Started libpod-conmon-5c86ecf8144150e37ea0bc2dc3eadd8e1334b302fcab3d273316447df3eede3f.scope.
Jan 31 04:57:57 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6582a36c776df6008b15a903f2171ee34c65f50284862e0bb42bc8f51f18c16a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6582a36c776df6008b15a903f2171ee34c65f50284862e0bb42bc8f51f18c16a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6582a36c776df6008b15a903f2171ee34c65f50284862e0bb42bc8f51f18c16a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:57:57 np0005603787 podman[77672]: 2026-01-31 09:57:57.32149196 +0000 UTC m=+0.119437388 container init 5c86ecf8144150e37ea0bc2dc3eadd8e1334b302fcab3d273316447df3eede3f (image=quay.io/ceph/ceph:v20, name=cranky_hertz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:57:57 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:57 np0005603787 podman[77672]: 2026-01-31 09:57:57.226022164 +0000 UTC m=+0.023967572 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:57 np0005603787 podman[77672]: 2026-01-31 09:57:57.326964436 +0000 UTC m=+0.124909834 container start 5c86ecf8144150e37ea0bc2dc3eadd8e1334b302fcab3d273316447df3eede3f (image=quay.io/ceph/ceph:v20, name=cranky_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 04:57:57 np0005603787 podman[77672]: 2026-01-31 09:57:57.331194299 +0000 UTC m=+0.129139717 container attach 5c86ecf8144150e37ea0bc2dc3eadd8e1334b302fcab3d273316447df3eede3f (image=quay.io/ceph/ceph:v20, name=cranky_hertz, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:57:57 np0005603787 podman[77792]: 2026-01-31 09:57:57.625895114 +0000 UTC m=+0.034377331 container create 53f714ab84c83e1caaccbbbc6d195039809bef8c6c62d98a47a140775515cc7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 04:57:57 np0005603787 systemd[1]: Started libpod-conmon-53f714ab84c83e1caaccbbbc6d195039809bef8c6c62d98a47a140775515cc7d.scope.
Jan 31 04:57:57 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:57 np0005603787 podman[77792]: 2026-01-31 09:57:57.689496185 +0000 UTC m=+0.097978432 container init 53f714ab84c83e1caaccbbbc6d195039809bef8c6c62d98a47a140775515cc7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 04:57:57 np0005603787 podman[77792]: 2026-01-31 09:57:57.69342318 +0000 UTC m=+0.101905397 container start 53f714ab84c83e1caaccbbbc6d195039809bef8c6c62d98a47a140775515cc7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_rubin, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:57:57 np0005603787 podman[77792]: 2026-01-31 09:57:57.696122623 +0000 UTC m=+0.104604870 container attach 53f714ab84c83e1caaccbbbc6d195039809bef8c6c62d98a47a140775515cc7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_rubin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:57 np0005603787 cranky_rubin[77808]: 167 167
Jan 31 04:57:57 np0005603787 systemd[1]: libpod-53f714ab84c83e1caaccbbbc6d195039809bef8c6c62d98a47a140775515cc7d.scope: Deactivated successfully.
Jan 31 04:57:57 np0005603787 podman[77792]: 2026-01-31 09:57:57.697698715 +0000 UTC m=+0.106180932 container died 53f714ab84c83e1caaccbbbc6d195039809bef8c6c62d98a47a140775515cc7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_rubin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:57:57 np0005603787 podman[77792]: 2026-01-31 09:57:57.610585144 +0000 UTC m=+0.019067391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:57:57 np0005603787 systemd[1]: var-lib-containers-storage-overlay-efd60cc8a831f69190b5fc8426d7b36970dd25ae1c9d3becdb77951b9953496f-merged.mount: Deactivated successfully.
Jan 31 04:57:57 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:57:57 np0005603787 podman[77792]: 2026-01-31 09:57:57.731959532 +0000 UTC m=+0.140441749 container remove 53f714ab84c83e1caaccbbbc6d195039809bef8c6c62d98a47a140775515cc7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:57:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 04:57:57 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:57 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Added label _admin to host compute-0
Jan 31 04:57:57 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 31 04:57:57 np0005603787 cranky_hertz[77706]: Added label _admin to host compute-0
Jan 31 04:57:57 np0005603787 systemd[1]: libpod-conmon-53f714ab84c83e1caaccbbbc6d195039809bef8c6c62d98a47a140775515cc7d.scope: Deactivated successfully.
Jan 31 04:57:57 np0005603787 systemd[1]: libpod-5c86ecf8144150e37ea0bc2dc3eadd8e1334b302fcab3d273316447df3eede3f.scope: Deactivated successfully.
Jan 31 04:57:57 np0005603787 podman[77827]: 2026-01-31 09:57:57.784438946 +0000 UTC m=+0.021150317 container died 5c86ecf8144150e37ea0bc2dc3eadd8e1334b302fcab3d273316447df3eede3f (image=quay.io/ceph/ceph:v20, name=cranky_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 04:57:57 np0005603787 systemd[1]: var-lib-containers-storage-overlay-6582a36c776df6008b15a903f2171ee34c65f50284862e0bb42bc8f51f18c16a-merged.mount: Deactivated successfully.
Jan 31 04:57:57 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:57:57 np0005603787 podman[77827]: 2026-01-31 09:57:57.818527828 +0000 UTC m=+0.055239149 container remove 5c86ecf8144150e37ea0bc2dc3eadd8e1334b302fcab3d273316447df3eede3f (image=quay.io/ceph/ceph:v20, name=cranky_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:57 np0005603787 systemd[1]: libpod-conmon-5c86ecf8144150e37ea0bc2dc3eadd8e1334b302fcab3d273316447df3eede3f.scope: Deactivated successfully.
Jan 31 04:57:57 np0005603787 podman[77841]: 2026-01-31 09:57:57.872579824 +0000 UTC m=+0.036445646 container create b156f3d527a9920f503c60e984290fa06c80182e3cf725d33fba8743f0f63286 (image=quay.io/ceph/ceph:v20, name=charming_booth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 04:57:57 np0005603787 systemd[1]: Started libpod-conmon-b156f3d527a9920f503c60e984290fa06c80182e3cf725d33fba8743f0f63286.scope.
Jan 31 04:57:57 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a2079ff27d963b072421b2e4bb637a6af3115845e8f3cc2bb4e9610af7e9b3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a2079ff27d963b072421b2e4bb637a6af3115845e8f3cc2bb4e9610af7e9b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a2079ff27d963b072421b2e4bb637a6af3115845e8f3cc2bb4e9610af7e9b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:57 np0005603787 podman[77841]: 2026-01-31 09:57:57.947358355 +0000 UTC m=+0.111224247 container init b156f3d527a9920f503c60e984290fa06c80182e3cf725d33fba8743f0f63286 (image=quay.io/ceph/ceph:v20, name=charming_booth, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:57:57 np0005603787 podman[77841]: 2026-01-31 09:57:57.951941158 +0000 UTC m=+0.115807000 container start b156f3d527a9920f503c60e984290fa06c80182e3cf725d33fba8743f0f63286 (image=quay.io/ceph/ceph:v20, name=charming_booth, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 04:57:57 np0005603787 podman[77841]: 2026-01-31 09:57:57.857610934 +0000 UTC m=+0.021476766 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:57 np0005603787 podman[77841]: 2026-01-31 09:57:57.957428645 +0000 UTC m=+0.121294497 container attach b156f3d527a9920f503c60e984290fa06c80182e3cf725d33fba8743f0f63286 (image=quay.io/ceph/ceph:v20, name=charming_booth, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:57:58 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:58 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:58 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:58 np0005603787 ceph-mon[75160]: Added label _admin to host compute-0
Jan 31 04:57:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 31 04:57:58 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1614508688' entity='client.admin' 
Jan 31 04:57:58 np0005603787 charming_booth[77858]: set mgr/dashboard/cluster/status
Jan 31 04:57:58 np0005603787 systemd[1]: libpod-b156f3d527a9920f503c60e984290fa06c80182e3cf725d33fba8743f0f63286.scope: Deactivated successfully.
Jan 31 04:57:58 np0005603787 podman[77841]: 2026-01-31 09:57:58.496864718 +0000 UTC m=+0.660730550 container died b156f3d527a9920f503c60e984290fa06c80182e3cf725d33fba8743f0f63286 (image=quay.io/ceph/ceph:v20, name=charming_booth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:57:58 np0005603787 systemd[1]: var-lib-containers-storage-overlay-e4a2079ff27d963b072421b2e4bb637a6af3115845e8f3cc2bb4e9610af7e9b3-merged.mount: Deactivated successfully.
Jan 31 04:57:58 np0005603787 podman[77841]: 2026-01-31 09:57:58.535975234 +0000 UTC m=+0.699841036 container remove b156f3d527a9920f503c60e984290fa06c80182e3cf725d33fba8743f0f63286 (image=quay.io/ceph/ceph:v20, name=charming_booth, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 04:57:58 np0005603787 systemd[1]: libpod-conmon-b156f3d527a9920f503c60e984290fa06c80182e3cf725d33fba8743f0f63286.scope: Deactivated successfully.
Jan 31 04:57:58 np0005603787 systemd[1]: Reloading.
Jan 31 04:57:58 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:57:58 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:57:58 np0005603787 podman[77942]: 2026-01-31 09:57:58.898475453 +0000 UTC m=+0.035454989 container create 4357682e0c815dee64673ab29b07685356250ce12fb2c0b2c52003ea9bbd4dde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:58 np0005603787 systemd[1]: Started libpod-conmon-4357682e0c815dee64673ab29b07685356250ce12fb2c0b2c52003ea9bbd4dde.scope.
Jan 31 04:57:58 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:58 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d19206a283337f0535ad9e1d8623a8c7b303b41873e939b7b53986e8574a488/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:58 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d19206a283337f0535ad9e1d8623a8c7b303b41873e939b7b53986e8574a488/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:58 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d19206a283337f0535ad9e1d8623a8c7b303b41873e939b7b53986e8574a488/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:58 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d19206a283337f0535ad9e1d8623a8c7b303b41873e939b7b53986e8574a488/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:58 np0005603787 podman[77942]: 2026-01-31 09:57:58.968502567 +0000 UTC m=+0.105482103 container init 4357682e0c815dee64673ab29b07685356250ce12fb2c0b2c52003ea9bbd4dde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_perlman, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 04:57:58 np0005603787 podman[77942]: 2026-01-31 09:57:58.974584609 +0000 UTC m=+0.111564145 container start 4357682e0c815dee64673ab29b07685356250ce12fb2c0b2c52003ea9bbd4dde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_perlman, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 04:57:58 np0005603787 podman[77942]: 2026-01-31 09:57:58.978303839 +0000 UTC m=+0.115283385 container attach 4357682e0c815dee64673ab29b07685356250ce12fb2c0b2c52003ea9bbd4dde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_perlman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:57:58 np0005603787 podman[77942]: 2026-01-31 09:57:58.883281037 +0000 UTC m=+0.020260593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:57:59 np0005603787 ceph-mgr[75453]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 04:57:59 np0005603787 python3[77988]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
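[editor's note] The Ansible task above wraps a one-shot "ceph config set" in a throwaway podman container (which shows up in the log as sleepy_cori). A minimal Python sketch of the same pattern, with the fsid, image, and mount copied from the logged command; it assumes podman and the admin keyring are present on the host, and is illustrative rather than the playbook's actual code:

    # Sketch: re-run the logged one-shot config-set through podman.
    # FSID and IMAGE are taken verbatim from the log line above.
    import subprocess

    FSID = "962d77ae-dc67-5de8-89d8-3d1670c67b61"
    IMAGE = "quay.io/ceph/ceph:v20"

    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", IMAGE,
        "--fsid", FSID,
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "config", "set", "mgr", "mgr/cephadm/use_repo_digest", "false",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout or result.stderr)

The container exits as soon as the command returns, which is why each of these runs appears in the log as a create/start/attach/died/remove burst lasting under a second.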
Jan 31 04:57:59 np0005603787 podman[77994]: 2026-01-31 09:57:59.27025367 +0000 UTC m=+0.040922496 container create 8d0ad48ff8bdf316bf3a6141bb1b14078091cafd074a9209d11a88616849e2ac (image=quay.io/ceph/ceph:v20, name=sleepy_cori, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 04:57:59 np0005603787 systemd[1]: Started libpod-conmon-8d0ad48ff8bdf316bf3a6141bb1b14078091cafd074a9209d11a88616849e2ac.scope.
Jan 31 04:57:59 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:57:59 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed80b9b50eba66ebca06bfa7c28696d0da8e4b7553eb60629f3b837be5e5294/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:59 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed80b9b50eba66ebca06bfa7c28696d0da8e4b7553eb60629f3b837be5e5294/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:57:59 np0005603787 podman[77994]: 2026-01-31 09:57:59.342594626 +0000 UTC m=+0.113263472 container init 8d0ad48ff8bdf316bf3a6141bb1b14078091cafd074a9209d11a88616849e2ac (image=quay.io/ceph/ceph:v20, name=sleepy_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Jan 31 04:57:59 np0005603787 podman[77994]: 2026-01-31 09:57:59.252448104 +0000 UTC m=+0.023116990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:57:59 np0005603787 podman[77994]: 2026-01-31 09:57:59.348189106 +0000 UTC m=+0.118857932 container start 8d0ad48ff8bdf316bf3a6141bb1b14078091cafd074a9209d11a88616849e2ac (image=quay.io/ceph/ceph:v20, name=sleepy_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:57:59 np0005603787 podman[77994]: 2026-01-31 09:57:59.352533112 +0000 UTC m=+0.123201958 container attach 8d0ad48ff8bdf316bf3a6141bb1b14078091cafd074a9209d11a88616849e2ac (image=quay.io/ceph/ceph:v20, name=sleepy_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]: [
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:    {
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:        "available": false,
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:        "being_replaced": false,
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:        "ceph_device_lvm": false,
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:        "lsm_data": {},
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:        "lvs": [],
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:        "path": "/dev/sr0",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:        "rejected_reasons": [
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "Has a FileSystem",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "Insufficient space (<5GB)"
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:        ],
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:        "sys_api": {
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "actuators": null,
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "device_nodes": [
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:                "sr0"
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            ],
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "devname": "sr0",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "human_readable_size": "482.00 KB",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "id_bus": "ata",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "model": "QEMU DVD-ROM",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "nr_requests": "2",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "parent": "/dev/sr0",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "partitions": {},
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "path": "/dev/sr0",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "removable": "1",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "rev": "2.5+",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "ro": "0",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "rotational": "1",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "sas_address": "",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "sas_device_handle": "",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "scheduler_mode": "mq-deadline",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "sectors": 0,
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "sectorsize": "2048",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "size": 493568.0,
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "support_discard": "2048",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "type": "disk",
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:            "vendor": "QEMU"
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:        }
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]:    }
Jan 31 04:57:59 np0005603787 recursing_perlman[77958]: ]
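[editor's note] The JSON printed by recursing_perlman is cephadm's device-inventory report for this host: the only device found is the QEMU DVD-ROM at /dev/sr0, rejected with "Has a FileSystem" and "Insufficient space (<5GB)", so no OSDs can be created here. That is consistent with the TOO_FEW_OSDS health warning later in the log. A small sketch of how such a report can be split into usable and rejected devices; the JSON literal is abridged from the output above, and field names match it:

    # Sketch: classify devices in a cephadm inventory report.
    import json

    report = json.loads("""
    [
      {"available": false, "path": "/dev/sr0",
       "rejected_reasons": ["Has a FileSystem", "Insufficient space (<5GB)"]}
    ]
    """)

    usable = [d["path"] for d in report if d["available"]]
    for dev in report:
        if not dev["available"]:
            print(f'{dev["path"]} rejected: {", ".join(dev["rejected_reasons"])}')
    print("usable devices:", usable or "none")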
Jan 31 04:57:59 np0005603787 systemd[1]: libpod-4357682e0c815dee64673ab29b07685356250ce12fb2c0b2c52003ea9bbd4dde.scope: Deactivated successfully.
Jan 31 04:57:59 np0005603787 conmon[77958]: conmon 4357682e0c815dee6467 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4357682e0c815dee64673ab29b07685356250ce12fb2c0b2c52003ea9bbd4dde.scope/container/memory.events
Jan 31 04:57:59 np0005603787 podman[77942]: 2026-01-31 09:57:59.429629372 +0000 UTC m=+0.566608918 container died 4357682e0c815dee64673ab29b07685356250ce12fb2c0b2c52003ea9bbd4dde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:57:59 np0005603787 systemd[1]: var-lib-containers-storage-overlay-6d19206a283337f0535ad9e1d8623a8c7b303b41873e939b7b53986e8574a488-merged.mount: Deactivated successfully.
Jan 31 04:57:59 np0005603787 podman[77942]: 2026-01-31 09:57:59.48735386 +0000 UTC m=+0.624333396 container remove 4357682e0c815dee64673ab29b07685356250ce12fb2c0b2c52003ea9bbd4dde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_perlman, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:57:59 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/1614508688' entity='client.admin' 
Jan 31 04:57:59 np0005603787 systemd[1]: libpod-conmon-4357682e0c815dee64673ab29b07685356250ce12fb2c0b2c52003ea9bbd4dde.scope: Deactivated successfully.
Jan 31 04:57:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:57:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:57:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:57:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:57:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:57:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 04:57:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 04:57:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:57:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:57:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 04:57:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:57:59 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 31 04:57:59 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 31 04:57:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 31 04:57:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4036949405' entity='client.admin' 
Jan 31 04:57:59 np0005603787 systemd[1]: libpod-8d0ad48ff8bdf316bf3a6141bb1b14078091cafd074a9209d11a88616849e2ac.scope: Deactivated successfully.
Jan 31 04:57:59 np0005603787 conmon[78148]: conmon 8d0ad48ff8bdf316bf3a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d0ad48ff8bdf316bf3a6141bb1b14078091cafd074a9209d11a88616849e2ac.scope/container/memory.events
Jan 31 04:57:59 np0005603787 podman[77994]: 2026-01-31 09:57:59.771535913 +0000 UTC m=+0.542204749 container died 8d0ad48ff8bdf316bf3a6141bb1b14078091cafd074a9209d11a88616849e2ac (image=quay.io/ceph/ceph:v20, name=sleepy_cori, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 04:57:59 np0005603787 systemd[1]: var-lib-containers-storage-overlay-eed80b9b50eba66ebca06bfa7c28696d0da8e4b7553eb60629f3b837be5e5294-merged.mount: Deactivated successfully.
Jan 31 04:57:59 np0005603787 podman[77994]: 2026-01-31 09:57:59.804819857 +0000 UTC m=+0.575488683 container remove 8d0ad48ff8bdf316bf3a6141bb1b14078091cafd074a9209d11a88616849e2ac (image=quay.io/ceph/ceph:v20, name=sleepy_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:57:59 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:57:59 np0005603787 systemd[1]: libpod-conmon-8d0ad48ff8bdf316bf3a6141bb1b14078091cafd074a9209d11a88616849e2ac.scope: Deactivated successfully.
Jan 31 04:57:59 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/962d77ae-dc67-5de8-89d8-3d1670c67b61/config/ceph.conf
Jan 31 04:57:59 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/962d77ae-dc67-5de8-89d8-3d1670c67b61/config/ceph.conf
Jan 31 04:58:00 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 04:58:00 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 04:58:00 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:00 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:00 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:00 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:00 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 04:58:00 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:58:00 np0005603787 ceph-mon[75160]: Updating compute-0:/etc/ceph/ceph.conf
Jan 31 04:58:00 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/4036949405' entity='client.admin' 
Jan 31 04:58:00 np0005603787 ansible-async_wrapper.py[79312]: Invoked with j76046966029 30 /home/zuul/.ansible/tmp/ansible-tmp-1769853480.0934863-36535-41242175983212/AnsiballZ_command.py _
Jan 31 04:58:00 np0005603787 ansible-async_wrapper.py[79379]: Starting module and watcher
Jan 31 04:58:00 np0005603787 ansible-async_wrapper.py[79379]: Start watching 79381 (30)
Jan 31 04:58:00 np0005603787 ansible-async_wrapper.py[79381]: Start module (79381)
Jan 31 04:58:00 np0005603787 ansible-async_wrapper.py[79312]: Return async_wrapper task started.
Jan 31 04:58:00 np0005603787 python3[79387]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:58:00 np0005603787 podman[79468]: 2026-01-31 09:58:00.78484194 +0000 UTC m=+0.033822749 container create 921cf546fe69d3d8bca3ac0583a8591c47a6cc8e79232dbde3f7f3c45b2fe8cf (image=quay.io/ceph/ceph:v20, name=infallible_hopper, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:00 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/962d77ae-dc67-5de8-89d8-3d1670c67b61/config/ceph.client.admin.keyring
Jan 31 04:58:00 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/962d77ae-dc67-5de8-89d8-3d1670c67b61/config/ceph.client.admin.keyring
Jan 31 04:58:00 np0005603787 systemd[1]: Started libpod-conmon-921cf546fe69d3d8bca3ac0583a8591c47a6cc8e79232dbde3f7f3c45b2fe8cf.scope.
Jan 31 04:58:00 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:00 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9add4f46834b17f3c59bf928734373d7893eaa8ace61dc557fef4a7436ba230a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:00 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9add4f46834b17f3c59bf928734373d7893eaa8ace61dc557fef4a7436ba230a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:00 np0005603787 podman[79468]: 2026-01-31 09:58:00.858573451 +0000 UTC m=+0.107554280 container init 921cf546fe69d3d8bca3ac0583a8591c47a6cc8e79232dbde3f7f3c45b2fe8cf (image=quay.io/ceph/ceph:v20, name=infallible_hopper, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:58:00 np0005603787 podman[79468]: 2026-01-31 09:58:00.864295477 +0000 UTC m=+0.113276286 container start 921cf546fe69d3d8bca3ac0583a8591c47a6cc8e79232dbde3f7f3c45b2fe8cf (image=quay.io/ceph/ceph:v20, name=infallible_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 04:58:00 np0005603787 podman[79468]: 2026-01-31 09:58:00.770550522 +0000 UTC m=+0.019531351 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:00 np0005603787 podman[79468]: 2026-01-31 09:58:00.868333936 +0000 UTC m=+0.117314775 container attach 921cf546fe69d3d8bca3ac0583a8591c47a6cc8e79232dbde3f7f3c45b2fe8cf (image=quay.io/ceph/ceph:v20, name=infallible_hopper, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:01 np0005603787 ceph-mgr[75453]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:01 np0005603787 ceph-mgr[75453]: [progress INFO root] update: starting ev b1c746a5-74a5-4ad5-82ec-218270a64474 (Updating crash deployment (+1 -> 1))
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
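[editor's note] The dispatch/finished pair above shows cephadm minting a per-host crash key. The same request, expressed as a direct CLI call with the entity and caps copied from the mon audit line; this is a sketch assuming an admin keyring is available on the host, not what cephadm itself executes:

    # Sketch: mint the per-host crash key requested in the audit log above.
    import subprocess

    subprocess.run([
        "ceph", "auth", "get-or-create", "client.crash.compute-0",
        "mon", "profile crash",
        "mgr", "profile crash",
    ], check=True)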
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:58:01 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 31 04:58:01 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:58:01 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 04:58:01 np0005603787 infallible_hopper[79531]: 
Jan 31 04:58:01 np0005603787 infallible_hopper[79531]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
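[editor's note] infallible_hopper is the container running the "orch status --format json" check from the preceding Ansible task, and the single JSON line above is its entire output. A sketch of the kind of gate a caller can apply to that payload; the JSON is copied from the log, the failure message is illustrative:

    # Sketch: gate on the orchestrator status printed above.
    import json

    status = json.loads('{"available": true, "backend": "cephadm", '
                        '"paused": false, "workers": 10}')
    if not status["available"] or status["paused"]:
        raise SystemExit("cephadm orchestrator not ready")
    print(f'orchestrator {status["backend"]} ready, {status["workers"]} workers')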
Jan 31 04:58:01 np0005603787 systemd[1]: libpod-921cf546fe69d3d8bca3ac0583a8591c47a6cc8e79232dbde3f7f3c45b2fe8cf.scope: Deactivated successfully.
Jan 31 04:58:01 np0005603787 podman[79468]: 2026-01-31 09:58:01.319144704 +0000 UTC m=+0.568125523 container died 921cf546fe69d3d8bca3ac0583a8591c47a6cc8e79232dbde3f7f3c45b2fe8cf (image=quay.io/ceph/ceph:v20, name=infallible_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:01 np0005603787 systemd[1]: var-lib-containers-storage-overlay-9add4f46834b17f3c59bf928734373d7893eaa8ace61dc557fef4a7436ba230a-merged.mount: Deactivated successfully.
Jan 31 04:58:01 np0005603787 podman[79468]: 2026-01-31 09:58:01.353127446 +0000 UTC m=+0.602108255 container remove 921cf546fe69d3d8bca3ac0583a8591c47a6cc8e79232dbde3f7f3c45b2fe8cf (image=quay.io/ceph/ceph:v20, name=infallible_hopper, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:01 np0005603787 systemd[1]: libpod-conmon-921cf546fe69d3d8bca3ac0583a8591c47a6cc8e79232dbde3f7f3c45b2fe8cf.scope: Deactivated successfully.
Jan 31 04:58:01 np0005603787 ansible-async_wrapper.py[79381]: Module complete (79381)
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: Updating compute-0:/var/lib/ceph/962d77ae-dc67-5de8-89d8-3d1670c67b61/config/ceph.conf
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: Updating compute-0:/var/lib/ceph/962d77ae-dc67-5de8-89d8-3d1670c67b61/config/ceph.client.admin.keyring
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 31 04:58:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 04:58:01 np0005603787 podman[79857]: 2026-01-31 09:58:01.666985495 +0000 UTC m=+0.084357570 container create 929a10404630214179bd5ac0b2702ca0f440d2bf5691b15faa07b5ea7866c2ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:01 np0005603787 podman[79857]: 2026-01-31 09:58:01.599928665 +0000 UTC m=+0.017300760 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:01 np0005603787 systemd[1]: Started libpod-conmon-929a10404630214179bd5ac0b2702ca0f440d2bf5691b15faa07b5ea7866c2ed.scope.
Jan 31 04:58:01 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:01 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:58:01 np0005603787 podman[79857]: 2026-01-31 09:58:01.82738935 +0000 UTC m=+0.244761515 container init 929a10404630214179bd5ac0b2702ca0f440d2bf5691b15faa07b5ea7866c2ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_rosalind, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:01 np0005603787 podman[79857]: 2026-01-31 09:58:01.832001765 +0000 UTC m=+0.249373840 container start 929a10404630214179bd5ac0b2702ca0f440d2bf5691b15faa07b5ea7866c2ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 04:58:01 np0005603787 confident_rosalind[79913]: 167 167
Jan 31 04:58:01 np0005603787 systemd[1]: libpod-929a10404630214179bd5ac0b2702ca0f440d2bf5691b15faa07b5ea7866c2ed.scope: Deactivated successfully.
Jan 31 04:58:01 np0005603787 conmon[79913]: conmon 929a10404630214179bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-929a10404630214179bd5ac0b2702ca0f440d2bf5691b15faa07b5ea7866c2ed.scope/container/memory.events
Jan 31 04:58:01 np0005603787 podman[79857]: 2026-01-31 09:58:01.840750742 +0000 UTC m=+0.258122837 container attach 929a10404630214179bd5ac0b2702ca0f440d2bf5691b15faa07b5ea7866c2ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_rosalind, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 04:58:01 np0005603787 podman[79857]: 2026-01-31 09:58:01.841286257 +0000 UTC m=+0.258658352 container died 929a10404630214179bd5ac0b2702ca0f440d2bf5691b15faa07b5ea7866c2ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 04:58:01 np0005603787 systemd[1]: var-lib-containers-storage-overlay-8621c40ecfbe6b980032f36fae498212338073e1f0c455edcfe70482ed423f77-merged.mount: Deactivated successfully.
Jan 31 04:58:01 np0005603787 podman[79857]: 2026-01-31 09:58:01.890361989 +0000 UTC m=+0.307734064 container remove 929a10404630214179bd5ac0b2702ca0f440d2bf5691b15faa07b5ea7866c2ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:01 np0005603787 systemd[1]: libpod-conmon-929a10404630214179bd5ac0b2702ca0f440d2bf5691b15faa07b5ea7866c2ed.scope: Deactivated successfully.
Jan 31 04:58:01 np0005603787 systemd[1]: Reloading.
Jan 31 04:58:01 np0005603787 python3[79924]: ansible-ansible.legacy.async_status Invoked with jid=j76046966029.79312 mode=status _async_dir=/root/.ansible_async
Jan 31 04:58:02 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:58:02 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:58:02 np0005603787 systemd[1]: Reloading.
Jan 31 04:58:02 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:58:02 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:58:02 np0005603787 python3[80027]: ansible-ansible.legacy.async_status Invoked with jid=j76046966029.79312 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 04:58:02 np0005603787 systemd[1]: Starting Ceph crash.compute-0 for 962d77ae-dc67-5de8-89d8-3d1670c67b61...
Jan 31 04:58:02 np0005603787 ceph-mon[75160]: Deploying daemon crash.compute-0 on compute-0
Jan 31 04:58:02 np0005603787 podman[80129]: 2026-01-31 09:58:02.591738088 +0000 UTC m=+0.043996846 container create ca20754edb2928244e66dc2ac1012e5009f09c0b3c14d521c971db3f1281c0ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-crash-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:02 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e19f302a378703c1d5ef017c4342c691190c744833cc05c2d7596896d051937d/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:02 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e19f302a378703c1d5ef017c4342c691190c744833cc05c2d7596896d051937d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:02 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e19f302a378703c1d5ef017c4342c691190c744833cc05c2d7596896d051937d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:02 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e19f302a378703c1d5ef017c4342c691190c744833cc05c2d7596896d051937d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:02 np0005603787 podman[80129]: 2026-01-31 09:58:02.660224177 +0000 UTC m=+0.112483005 container init ca20754edb2928244e66dc2ac1012e5009f09c0b3c14d521c971db3f1281c0ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-crash-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 04:58:02 np0005603787 podman[80129]: 2026-01-31 09:58:02.667456684 +0000 UTC m=+0.119715452 container start ca20754edb2928244e66dc2ac1012e5009f09c0b3c14d521c971db3f1281c0ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 04:58:02 np0005603787 podman[80129]: 2026-01-31 09:58:02.573354709 +0000 UTC m=+0.025613467 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:02 np0005603787 bash[80129]: ca20754edb2928244e66dc2ac1012e5009f09c0b3c14d521c971db3f1281c0ac
Jan 31 04:58:02 np0005603787 systemd[1]: Started Ceph crash.compute-0 for 962d77ae-dc67-5de8-89d8-3d1670c67b61.
Jan 31 04:58:02 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-crash-compute-0[80157]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 31 04:58:02 np0005603787 python3[80154]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 04:58:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:58:02 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:58:02 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 31 04:58:02 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:02 np0005603787 ceph-mgr[75453]: [progress INFO root] complete: finished ev b1c746a5-74a5-4ad5-82ec-218270a64474 (Updating crash deployment (+1 -> 1))
Jan 31 04:58:02 np0005603787 ceph-mgr[75453]: [progress INFO root] Completed event b1c746a5-74a5-4ad5-82ec-218270a64474 (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 31 04:58:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 31 04:58:02 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 04:58:02 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:02 np0005603787 ceph-mgr[75453]: [progress INFO root] update: starting ev 913a115c-5b98-4e23-aead-430463edd6d0 (Updating mgr deployment (+1 -> 2))
Jan 31 04:58:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.eqrvct", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 31 04:58:02 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.eqrvct", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 04:58:02 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.eqrvct", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 04:58:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 04:58:02 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "mgr services"} : dispatch
Jan 31 04:58:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:58:02 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:58:02 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.eqrvct on compute-0
Jan 31 04:58:02 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.eqrvct on compute-0
Jan 31 04:58:02 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-crash-compute-0[80157]: 2026-01-31T09:58:02.817+0000 7fc3b1d2f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 04:58:02 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-crash-compute-0[80157]: 2026-01-31T09:58:02.817+0000 7fc3b1d2f640 -1 AuthRegistry(0x7fc3ac053640) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 04:58:02 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-crash-compute-0[80157]: 2026-01-31T09:58:02.818+0000 7fc3b1d2f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 04:58:02 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-crash-compute-0[80157]: 2026-01-31T09:58:02.818+0000 7fc3b1d2f640 -1 AuthRegistry(0x7fc3b1d2dfe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 04:58:02 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-crash-compute-0[80157]: 2026-01-31T09:58:02.819+0000 7fc3ab7fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 31 04:58:02 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-crash-compute-0[80157]: 2026-01-31T09:58:02.819+0000 7fc3b1d2f640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 31 04:58:02 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-crash-compute-0[80157]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 31 04:58:02 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-crash-compute-0[80157]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 31 04:58:03 np0005603787 ceph-mgr[75453]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 31 04:58:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 04:58:03 np0005603787 ceph-mon[75160]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 31 04:58:03 np0005603787 python3[80258]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:58:03 np0005603787 podman[80289]: 2026-01-31 09:58:03.202352153 +0000 UTC m=+0.039777281 container create f1c648ecea75ffe84dd5ac19f9b8df291a68ccec35e481340b9efbf4befe2afe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:03 np0005603787 podman[80303]: 2026-01-31 09:58:03.234991539 +0000 UTC m=+0.042111134 container create aec6b2e17e6787b0353702aa107a55e0f38d452183104d9434c11135cdbc6d0b (image=quay.io/ceph/ceph:v20, name=flamboyant_goldwasser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 04:58:03 np0005603787 systemd[1]: Started libpod-conmon-f1c648ecea75ffe84dd5ac19f9b8df291a68ccec35e481340b9efbf4befe2afe.scope.
Jan 31 04:58:03 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:03 np0005603787 systemd[1]: Started libpod-conmon-aec6b2e17e6787b0353702aa107a55e0f38d452183104d9434c11135cdbc6d0b.scope.
Jan 31 04:58:03 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:03 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d43bdd4ee72f776911acb06eec1fed46fa118b97b43cdbf1a8112a91cd9d5a4d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:03 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d43bdd4ee72f776911acb06eec1fed46fa118b97b43cdbf1a8112a91cd9d5a4d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:03 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d43bdd4ee72f776911acb06eec1fed46fa118b97b43cdbf1a8112a91cd9d5a4d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:03 np0005603787 podman[80289]: 2026-01-31 09:58:03.279830166 +0000 UTC m=+0.117255274 container init f1c648ecea75ffe84dd5ac19f9b8df291a68ccec35e481340b9efbf4befe2afe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_allen, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 04:58:03 np0005603787 podman[80289]: 2026-01-31 09:58:03.185573848 +0000 UTC m=+0.022998966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:03 np0005603787 podman[80289]: 2026-01-31 09:58:03.292157661 +0000 UTC m=+0.129582759 container start f1c648ecea75ffe84dd5ac19f9b8df291a68ccec35e481340b9efbf4befe2afe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 04:58:03 np0005603787 podman[80303]: 2026-01-31 09:58:03.29546148 +0000 UTC m=+0.102581115 container init aec6b2e17e6787b0353702aa107a55e0f38d452183104d9434c11135cdbc6d0b (image=quay.io/ceph/ceph:v20, name=flamboyant_goldwasser, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 04:58:03 np0005603787 loving_allen[80319]: 167 167
Jan 31 04:58:03 np0005603787 systemd[1]: libpod-f1c648ecea75ffe84dd5ac19f9b8df291a68ccec35e481340b9efbf4befe2afe.scope: Deactivated successfully.
Jan 31 04:58:03 np0005603787 podman[80289]: 2026-01-31 09:58:03.298575146 +0000 UTC m=+0.136000244 container attach f1c648ecea75ffe84dd5ac19f9b8df291a68ccec35e481340b9efbf4befe2afe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_allen, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:03 np0005603787 podman[80289]: 2026-01-31 09:58:03.298967546 +0000 UTC m=+0.136392644 container died f1c648ecea75ffe84dd5ac19f9b8df291a68ccec35e481340b9efbf4befe2afe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 04:58:03 np0005603787 podman[80303]: 2026-01-31 09:58:03.299438908 +0000 UTC m=+0.106558513 container start aec6b2e17e6787b0353702aa107a55e0f38d452183104d9434c11135cdbc6d0b (image=quay.io/ceph/ceph:v20, name=flamboyant_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 04:58:03 np0005603787 podman[80303]: 2026-01-31 09:58:03.212578821 +0000 UTC m=+0.019698446 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:03 np0005603787 podman[80303]: 2026-01-31 09:58:03.312886514 +0000 UTC m=+0.120006139 container attach aec6b2e17e6787b0353702aa107a55e0f38d452183104d9434c11135cdbc6d0b (image=quay.io/ceph/ceph:v20, name=flamboyant_goldwasser, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:03 np0005603787 systemd[1]: var-lib-containers-storage-overlay-640bb87ac367718b6b6ee79c935fbd50ef6c680d6293021e3aad6b4bfeb08f3e-merged.mount: Deactivated successfully.
Jan 31 04:58:03 np0005603787 podman[80289]: 2026-01-31 09:58:03.339577648 +0000 UTC m=+0.177002736 container remove f1c648ecea75ffe84dd5ac19f9b8df291a68ccec35e481340b9efbf4befe2afe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:58:03 np0005603787 systemd[1]: libpod-conmon-f1c648ecea75ffe84dd5ac19f9b8df291a68ccec35e481340b9efbf4befe2afe.scope: Deactivated successfully.
Jan 31 04:58:03 np0005603787 systemd[1]: Reloading.
Jan 31 04:58:03 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:58:03 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:58:03 np0005603787 systemd[1]: Reloading.
Jan 31 04:58:03 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:58:03 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:58:03 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 04:58:03 np0005603787 flamboyant_goldwasser[80324]: 
Jan 31 04:58:03 np0005603787 flamboyant_goldwasser[80324]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 04:58:03 np0005603787 podman[80303]: 2026-01-31 09:58:03.733718847 +0000 UTC m=+0.540838462 container died aec6b2e17e6787b0353702aa107a55e0f38d452183104d9434c11135cdbc6d0b (image=quay.io/ceph/ceph:v20, name=flamboyant_goldwasser, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 04:58:03 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:03 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:03 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:03 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:03 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:03 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.eqrvct", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 04:58:03 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.eqrvct", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 04:58:03 np0005603787 ceph-mon[75160]: Deploying daemon mgr.compute-0.eqrvct on compute-0
Jan 31 04:58:03 np0005603787 ceph-mon[75160]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 31 04:58:03 np0005603787 ceph-mgr[75453]: [progress INFO root] Writing back 1 completed events
Jan 31 04:58:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 04:58:03 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:58:03 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:03 np0005603787 systemd[1]: libpod-aec6b2e17e6787b0353702aa107a55e0f38d452183104d9434c11135cdbc6d0b.scope: Deactivated successfully.
Jan 31 04:58:03 np0005603787 systemd[1]: var-lib-containers-storage-overlay-d43bdd4ee72f776911acb06eec1fed46fa118b97b43cdbf1a8112a91cd9d5a4d-merged.mount: Deactivated successfully.
Jan 31 04:58:03 np0005603787 systemd[1]: Starting Ceph mgr.compute-0.eqrvct for 962d77ae-dc67-5de8-89d8-3d1670c67b61...
Jan 31 04:58:03 np0005603787 podman[80303]: 2026-01-31 09:58:03.872845474 +0000 UTC m=+0.679965069 container remove aec6b2e17e6787b0353702aa107a55e0f38d452183104d9434c11135cdbc6d0b (image=quay.io/ceph/ceph:v20, name=flamboyant_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:03 np0005603787 systemd[1]: libpod-conmon-aec6b2e17e6787b0353702aa107a55e0f38d452183104d9434c11135cdbc6d0b.scope: Deactivated successfully.
Jan 31 04:58:04 np0005603787 podman[80500]: 2026-01-31 09:58:04.02339011 +0000 UTC m=+0.035334350 container create db1ea025034bda8574ccf32e1677cd425f1f571639f6d3cb2fd9e2e07f6698b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-eqrvct, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1243da6a2926922c52731e4794dc81bc04a419d53794fa8eaed3a4c3f1efd33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1243da6a2926922c52731e4794dc81bc04a419d53794fa8eaed3a4c3f1efd33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1243da6a2926922c52731e4794dc81bc04a419d53794fa8eaed3a4c3f1efd33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1243da6a2926922c52731e4794dc81bc04a419d53794fa8eaed3a4c3f1efd33/merged/var/lib/ceph/mgr/ceph-compute-0.eqrvct supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:04 np0005603787 podman[80500]: 2026-01-31 09:58:04.070970622 +0000 UTC m=+0.082914882 container init db1ea025034bda8574ccf32e1677cd425f1f571639f6d3cb2fd9e2e07f6698b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-eqrvct, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:04 np0005603787 podman[80500]: 2026-01-31 09:58:04.074423886 +0000 UTC m=+0.086368126 container start db1ea025034bda8574ccf32e1677cd425f1f571639f6d3cb2fd9e2e07f6698b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-eqrvct, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 31 04:58:04 np0005603787 bash[80500]: db1ea025034bda8574ccf32e1677cd425f1f571639f6d3cb2fd9e2e07f6698b3
Jan 31 04:58:04 np0005603787 podman[80500]: 2026-01-31 09:58:04.006938633 +0000 UTC m=+0.018882893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:04 np0005603787 systemd[1]: Started Ceph mgr.compute-0.eqrvct for 962d77ae-dc67-5de8-89d8-3d1670c67b61.
Jan 31 04:58:04 np0005603787 ceph-mgr[80520]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 04:58:04 np0005603787 ceph-mgr[80520]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 31 04:58:04 np0005603787 ceph-mgr[80520]: pidfile_write: ignore empty --pid-file
Jan 31 04:58:04 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:58:04 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'alerts'
Jan 31 04:58:04 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:04 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:58:04 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:04 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 04:58:04 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:04 np0005603787 ceph-mgr[75453]: [progress INFO root] complete: finished ev 913a115c-5b98-4e23-aead-430463edd6d0 (Updating mgr deployment (+1 -> 2))
Jan 31 04:58:04 np0005603787 ceph-mgr[75453]: [progress INFO root] Completed event 913a115c-5b98-4e23-aead-430463edd6d0 (Updating mgr deployment (+1 -> 2)) in 1 seconds
Jan 31 04:58:04 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 04:58:04 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:04 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'balancer'
Jan 31 04:58:04 np0005603787 python3[80589]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:58:04 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'cephadm'
Jan 31 04:58:04 np0005603787 podman[80642]: 2026-01-31 09:58:04.360632795 +0000 UTC m=+0.038403483 container create f4ba305ab8e7c44af8ddca9cd93ab7d519c2d226df0630e46386367cd2c547d2 (image=quay.io/ceph/ceph:v20, name=infallible_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 04:58:04 np0005603787 systemd[1]: Started libpod-conmon-f4ba305ab8e7c44af8ddca9cd93ab7d519c2d226df0630e46386367cd2c547d2.scope.
Jan 31 04:58:04 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fefb9a9c51d4030d306f1b13deb3fd4067019e49a7029f51ffe8488ab40d27/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fefb9a9c51d4030d306f1b13deb3fd4067019e49a7029f51ffe8488ab40d27/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fefb9a9c51d4030d306f1b13deb3fd4067019e49a7029f51ffe8488ab40d27/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:04 np0005603787 podman[80642]: 2026-01-31 09:58:04.43634152 +0000 UTC m=+0.114112208 container init f4ba305ab8e7c44af8ddca9cd93ab7d519c2d226df0630e46386367cd2c547d2 (image=quay.io/ceph/ceph:v20, name=infallible_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 04:58:04 np0005603787 podman[80642]: 2026-01-31 09:58:04.344293651 +0000 UTC m=+0.022064339 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:04 np0005603787 podman[80642]: 2026-01-31 09:58:04.44149625 +0000 UTC m=+0.119266918 container start f4ba305ab8e7c44af8ddca9cd93ab7d519c2d226df0630e46386367cd2c547d2 (image=quay.io/ceph/ceph:v20, name=infallible_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Jan 31 04:58:04 np0005603787 podman[80642]: 2026-01-31 09:58:04.445230011 +0000 UTC m=+0.123000679 container attach f4ba305ab8e7c44af8ddca9cd93ab7d519c2d226df0630e46386367cd2c547d2 (image=quay.io/ceph/ceph:v20, name=infallible_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:04 np0005603787 podman[80725]: 2026-01-31 09:58:04.643131123 +0000 UTC m=+0.051282183 container exec 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Jan 31 04:58:04 np0005603787 podman[80725]: 2026-01-31 09:58:04.740344002 +0000 UTC m=+0.148495042 container exec_died 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:04 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:04 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:04 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:04 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:04 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:04 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 31 04:58:04 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1416861841' entity='client.admin' 
Jan 31 04:58:04 np0005603787 systemd[1]: libpod-f4ba305ab8e7c44af8ddca9cd93ab7d519c2d226df0630e46386367cd2c547d2.scope: Deactivated successfully.
Jan 31 04:58:04 np0005603787 podman[80802]: 2026-01-31 09:58:04.92594086 +0000 UTC m=+0.028168185 container died f4ba305ab8e7c44af8ddca9cd93ab7d519c2d226df0630e46386367cd2c547d2 (image=quay.io/ceph/ceph:v20, name=infallible_brahmagupta, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 04:58:04 np0005603787 systemd[1]: var-lib-containers-storage-overlay-d8fefb9a9c51d4030d306f1b13deb3fd4067019e49a7029f51ffe8488ab40d27-merged.mount: Deactivated successfully.
Jan 31 04:58:04 np0005603787 podman[80802]: 2026-01-31 09:58:04.975132965 +0000 UTC m=+0.077360260 container remove f4ba305ab8e7c44af8ddca9cd93ab7d519c2d226df0630e46386367cd2c547d2 (image=quay.io/ceph/ceph:v20, name=infallible_brahmagupta, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 04:58:04 np0005603787 systemd[1]: libpod-conmon-f4ba305ab8e7c44af8ddca9cd93ab7d519c2d226df0630e46386367cd2c547d2.scope: Deactivated successfully.
Jan 31 04:58:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 04:58:05 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'crash'
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:58:05 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'dashboard'
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:05 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 04:58:05 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:58:05 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 04:58:05 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 04:58:05 np0005603787 python3[80891]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:58:05 np0005603787 podman[80930]: 2026-01-31 09:58:05.317360185 +0000 UTC m=+0.043963294 container create 16900fd8e5911ba928c330779efc753f3f70c7e088244685e7566d33ab67b202 (image=quay.io/ceph/ceph:v20, name=affectionate_lehmann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:05 np0005603787 systemd[1]: Started libpod-conmon-16900fd8e5911ba928c330779efc753f3f70c7e088244685e7566d33ab67b202.scope.
Jan 31 04:58:05 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:05 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b0b0eecd443cd4cd61bbaf05ab2f9cae06e4e67176e4ac1d9f25173f37ee8b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:05 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b0b0eecd443cd4cd61bbaf05ab2f9cae06e4e67176e4ac1d9f25173f37ee8b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:05 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b0b0eecd443cd4cd61bbaf05ab2f9cae06e4e67176e4ac1d9f25173f37ee8b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:05 np0005603787 podman[80930]: 2026-01-31 09:58:05.386878392 +0000 UTC m=+0.113481521 container init 16900fd8e5911ba928c330779efc753f3f70c7e088244685e7566d33ab67b202 (image=quay.io/ceph/ceph:v20, name=affectionate_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:05 np0005603787 podman[80930]: 2026-01-31 09:58:05.391527718 +0000 UTC m=+0.118130827 container start 16900fd8e5911ba928c330779efc753f3f70c7e088244685e7566d33ab67b202 (image=quay.io/ceph/ceph:v20, name=affectionate_lehmann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:05 np0005603787 podman[80930]: 2026-01-31 09:58:05.395170977 +0000 UTC m=+0.121774086 container attach 16900fd8e5911ba928c330779efc753f3f70c7e088244685e7566d33ab67b202 (image=quay.io/ceph/ceph:v20, name=affectionate_lehmann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:05 np0005603787 podman[80930]: 2026-01-31 09:58:05.301153926 +0000 UTC m=+0.027757055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:05 np0005603787 ansible-async_wrapper.py[79379]: Done in kid B.
Jan 31 04:58:05 np0005603787 podman[81019]: 2026-01-31 09:58:05.629843238 +0000 UTC m=+0.048191240 container create 99fe23f4ee241444857af2725c69381374d9e5b7e06c2dc87e65db705c806696 (image=quay.io/ceph/ceph:v20, name=blissful_bassi, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:58:05 np0005603787 systemd[1]: Started libpod-conmon-99fe23f4ee241444857af2725c69381374d9e5b7e06c2dc87e65db705c806696.scope.
Jan 31 04:58:05 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:05 np0005603787 podman[81019]: 2026-01-31 09:58:05.690688619 +0000 UTC m=+0.109036621 container init 99fe23f4ee241444857af2725c69381374d9e5b7e06c2dc87e65db705c806696 (image=quay.io/ceph/ceph:v20, name=blissful_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 04:58:05 np0005603787 podman[81019]: 2026-01-31 09:58:05.695814139 +0000 UTC m=+0.114162141 container start 99fe23f4ee241444857af2725c69381374d9e5b7e06c2dc87e65db705c806696 (image=quay.io/ceph/ceph:v20, name=blissful_bassi, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 31 04:58:05 np0005603787 podman[81019]: 2026-01-31 09:58:05.599919755 +0000 UTC m=+0.018267787 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:05 np0005603787 blissful_bassi[81034]: 167 167
Jan 31 04:58:05 np0005603787 podman[81019]: 2026-01-31 09:58:05.699246861 +0000 UTC m=+0.117594863 container attach 99fe23f4ee241444857af2725c69381374d9e5b7e06c2dc87e65db705c806696 (image=quay.io/ceph/ceph:v20, name=blissful_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 04:58:05 np0005603787 systemd[1]: libpod-99fe23f4ee241444857af2725c69381374d9e5b7e06c2dc87e65db705c806696.scope: Deactivated successfully.
Jan 31 04:58:05 np0005603787 podman[81019]: 2026-01-31 09:58:05.69956938 +0000 UTC m=+0.117917382 container died 99fe23f4ee241444857af2725c69381374d9e5b7e06c2dc87e65db705c806696 (image=quay.io/ceph/ceph:v20, name=blissful_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:05 np0005603787 systemd[1]: var-lib-containers-storage-overlay-dcc2c98f5c9f41989c3eda16219ec9611e17013c69380396053d2cb2cbf05240-merged.mount: Deactivated successfully.
Jan 31 04:58:05 np0005603787 podman[81019]: 2026-01-31 09:58:05.749690541 +0000 UTC m=+0.168038543 container remove 99fe23f4ee241444857af2725c69381374d9e5b7e06c2dc87e65db705c806696 (image=quay.io/ceph/ceph:v20, name=blissful_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 04:58:05 np0005603787 systemd[1]: libpod-conmon-99fe23f4ee241444857af2725c69381374d9e5b7e06c2dc87e65db705c806696.scope: Deactivated successfully.
Jan 31 04:58:05 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2747659567' entity='client.admin' 
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:05 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.mdmqaq (unknown last config time)...
Jan 31 04:58:05 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.mdmqaq (unknown last config time)...
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.mdmqaq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.mdmqaq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "mgr services"} : dispatch
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:58:05 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.mdmqaq on compute-0
Jan 31 04:58:05 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.mdmqaq on compute-0
Jan 31 04:58:05 np0005603787 systemd[1]: libpod-16900fd8e5911ba928c330779efc753f3f70c7e088244685e7566d33ab67b202.scope: Deactivated successfully.
Jan 31 04:58:05 np0005603787 podman[80930]: 2026-01-31 09:58:05.84578353 +0000 UTC m=+0.572386639 container died 16900fd8e5911ba928c330779efc753f3f70c7e088244685e7566d33ab67b202 (image=quay.io/ceph/ceph:v20, name=affectionate_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/1416861841' entity='client.admin' 
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2747659567' entity='client.admin' 
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:05 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.mdmqaq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 04:58:05 np0005603787 systemd[1]: var-lib-containers-storage-overlay-d9b0b0eecd443cd4cd61bbaf05ab2f9cae06e4e67176e4ac1d9f25173f37ee8b-merged.mount: Deactivated successfully.
Jan 31 04:58:05 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'devicehealth'
Jan 31 04:58:05 np0005603787 podman[80930]: 2026-01-31 09:58:05.908115141 +0000 UTC m=+0.634718250 container remove 16900fd8e5911ba928c330779efc753f3f70c7e088244685e7566d33ab67b202 (image=quay.io/ceph/ceph:v20, name=affectionate_lehmann, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 04:58:05 np0005603787 systemd[1]: libpod-conmon-16900fd8e5911ba928c330779efc753f3f70c7e088244685e7566d33ab67b202.scope: Deactivated successfully.
Jan 31 04:58:05 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 04:58:06 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-eqrvct[80516]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 04:58:06 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-eqrvct[80516]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 04:58:06 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-eqrvct[80516]:  from numpy import show_config as show_numpy_config
Jan 31 04:58:06 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'influx'
Jan 31 04:58:06 np0005603787 python3[81138]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:58:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:58:06 np0005603787 podman[81154]: 2026-01-31 09:58:06.222782103 +0000 UTC m=+0.047787398 container create 0f52fdd9a76413f14d2986439227b66d95882d5f788a9d99c3aa5fc7787671b4 (image=quay.io/ceph/ceph:v20, name=angry_nightingale, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:06 np0005603787 systemd[1]: Started libpod-conmon-0f52fdd9a76413f14d2986439227b66d95882d5f788a9d99c3aa5fc7787671b4.scope.
Jan 31 04:58:06 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'insights'
Jan 31 04:58:06 np0005603787 podman[81168]: 2026-01-31 09:58:06.266697375 +0000 UTC m=+0.053876833 container create 2b8f817cab09dc14b1f2a07e8c0bbf95d9b5968cf40405321ebeab65960f3ecf (image=quay.io/ceph/ceph:v20, name=gifted_volhard, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 04:58:06 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:06 np0005603787 podman[81154]: 2026-01-31 09:58:06.283362067 +0000 UTC m=+0.108367182 container init 0f52fdd9a76413f14d2986439227b66d95882d5f788a9d99c3aa5fc7787671b4 (image=quay.io/ceph/ceph:v20, name=angry_nightingale, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:06 np0005603787 podman[81154]: 2026-01-31 09:58:06.288129347 +0000 UTC m=+0.113134442 container start 0f52fdd9a76413f14d2986439227b66d95882d5f788a9d99c3aa5fc7787671b4 (image=quay.io/ceph/ceph:v20, name=angry_nightingale, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 04:58:06 np0005603787 systemd[1]: Started libpod-conmon-2b8f817cab09dc14b1f2a07e8c0bbf95d9b5968cf40405321ebeab65960f3ecf.scope.
Jan 31 04:58:06 np0005603787 podman[81154]: 2026-01-31 09:58:06.203108009 +0000 UTC m=+0.028113124 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:06 np0005603787 angry_nightingale[81183]: 167 167
Jan 31 04:58:06 np0005603787 systemd[1]: libpod-0f52fdd9a76413f14d2986439227b66d95882d5f788a9d99c3aa5fc7787671b4.scope: Deactivated successfully.
Jan 31 04:58:06 np0005603787 podman[81154]: 2026-01-31 09:58:06.293665477 +0000 UTC m=+0.118670582 container attach 0f52fdd9a76413f14d2986439227b66d95882d5f788a9d99c3aa5fc7787671b4 (image=quay.io/ceph/ceph:v20, name=angry_nightingale, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 04:58:06 np0005603787 podman[81154]: 2026-01-31 09:58:06.293955475 +0000 UTC m=+0.118960570 container died 0f52fdd9a76413f14d2986439227b66d95882d5f788a9d99c3aa5fc7787671b4 (image=quay.io/ceph/ceph:v20, name=angry_nightingale, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:06 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:06 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a39deeb78d180a2ac4882573c4405abc20adbd782893096be5521b972e313514/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:06 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a39deeb78d180a2ac4882573c4405abc20adbd782893096be5521b972e313514/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:06 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a39deeb78d180a2ac4882573c4405abc20adbd782893096be5521b972e313514/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:06 np0005603787 podman[81154]: 2026-01-31 09:58:06.342465272 +0000 UTC m=+0.167470407 container remove 0f52fdd9a76413f14d2986439227b66d95882d5f788a9d99c3aa5fc7787671b4 (image=quay.io/ceph/ceph:v20, name=angry_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 04:58:06 np0005603787 podman[81168]: 2026-01-31 09:58:06.245719705 +0000 UTC m=+0.032899193 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:06 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'iostat'
Jan 31 04:58:06 np0005603787 podman[81168]: 2026-01-31 09:58:06.350359096 +0000 UTC m=+0.137538584 container init 2b8f817cab09dc14b1f2a07e8c0bbf95d9b5968cf40405321ebeab65960f3ecf (image=quay.io/ceph/ceph:v20, name=gifted_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 04:58:06 np0005603787 podman[81168]: 2026-01-31 09:58:06.355617109 +0000 UTC m=+0.142796567 container start 2b8f817cab09dc14b1f2a07e8c0bbf95d9b5968cf40405321ebeab65960f3ecf (image=quay.io/ceph/ceph:v20, name=gifted_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 04:58:06 np0005603787 podman[81168]: 2026-01-31 09:58:06.359258108 +0000 UTC m=+0.146437576 container attach 2b8f817cab09dc14b1f2a07e8c0bbf95d9b5968cf40405321ebeab65960f3ecf (image=quay.io/ceph/ceph:v20, name=gifted_volhard, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 04:58:06 np0005603787 systemd[1]: libpod-conmon-0f52fdd9a76413f14d2986439227b66d95882d5f788a9d99c3aa5fc7787671b4.scope: Deactivated successfully.
Jan 31 04:58:06 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'k8sevents'
Jan 31 04:58:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:58:06 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:58:06 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:06 np0005603787 systemd[1]: var-lib-containers-storage-overlay-3fb5695941df0ad2d72f461ffd8b4480aa57cb8ca1b80b1e54cb1d3443f69539-merged.mount: Deactivated successfully.
Jan 31 04:58:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 31 04:58:06 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1560414008' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 31 04:58:06 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'localpool'
Jan 31 04:58:06 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 04:58:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 04:58:07 np0005603787 podman[81321]: 2026-01-31 09:58:07.073705772 +0000 UTC m=+0.252743773 container exec 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 31 04:58:07 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'mirroring'
Jan 31 04:58:07 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'nfs'
Jan 31 04:58:07 np0005603787 podman[81321]: 2026-01-31 09:58:07.180509451 +0000 UTC m=+0.359547452 container exec_died 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:07 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'orchestrator'
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: Reconfiguring mgr.compute-0.mdmqaq (unknown last config time)...
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: Reconfiguring daemon mgr.compute-0.mdmqaq on compute-0
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/1560414008' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1560414008' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 31 04:58:07 np0005603787 gifted_volhard[81190]: set require_min_compat_client to mimic
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 31 04:58:07 np0005603787 systemd[1]: libpod-2b8f817cab09dc14b1f2a07e8c0bbf95d9b5968cf40405321ebeab65960f3ecf.scope: Deactivated successfully.
Jan 31 04:58:07 np0005603787 podman[81168]: 2026-01-31 09:58:07.469981689 +0000 UTC m=+1.257161177 container died 2b8f817cab09dc14b1f2a07e8c0bbf95d9b5968cf40405321ebeab65960f3ecf (image=quay.io/ceph/ceph:v20, name=gifted_volhard, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 04:58:07 np0005603787 systemd[1]: var-lib-containers-storage-overlay-a39deeb78d180a2ac4882573c4405abc20adbd782893096be5521b972e313514-merged.mount: Deactivated successfully.
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:58:07 np0005603787 podman[81168]: 2026-01-31 09:58:07.515691539 +0000 UTC m=+1.302870997 container remove 2b8f817cab09dc14b1f2a07e8c0bbf95d9b5968cf40405321ebeab65960f3ecf (image=quay.io/ceph/ceph:v20, name=gifted_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:58:07 np0005603787 systemd[1]: libpod-conmon-2b8f817cab09dc14b1f2a07e8c0bbf95d9b5968cf40405321ebeab65960f3ecf.scope: Deactivated successfully.
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 04:58:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:07 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 04:58:07 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'osd_support'
Jan 31 04:58:07 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 04:58:07 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:58:07 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'progress'
Jan 31 04:58:07 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'prometheus'
Jan 31 04:58:08 np0005603787 python3[81503]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:58:08 np0005603787 podman[81504]: 2026-01-31 09:58:08.072470603 +0000 UTC m=+0.035884626 container create e539c922ec48699d62201b0bc8f44af3dbac3b557b988ec05ad417a6231e96d3 (image=quay.io/ceph/ceph:v20, name=eager_snyder, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:08 np0005603787 systemd[1]: Started libpod-conmon-e539c922ec48699d62201b0bc8f44af3dbac3b557b988ec05ad417a6231e96d3.scope.
Jan 31 04:58:08 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:08 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f152f59c702ad2daa2a4fde8c6db34897b1972e7ed3cd64fdec8b13467a2c57/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:08 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f152f59c702ad2daa2a4fde8c6db34897b1972e7ed3cd64fdec8b13467a2c57/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:08 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f152f59c702ad2daa2a4fde8c6db34897b1972e7ed3cd64fdec8b13467a2c57/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:08 np0005603787 podman[81504]: 2026-01-31 09:58:08.134537007 +0000 UTC m=+0.097951030 container init e539c922ec48699d62201b0bc8f44af3dbac3b557b988ec05ad417a6231e96d3 (image=quay.io/ceph/ceph:v20, name=eager_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 04:58:08 np0005603787 podman[81504]: 2026-01-31 09:58:08.140047977 +0000 UTC m=+0.103462000 container start e539c922ec48699d62201b0bc8f44af3dbac3b557b988ec05ad417a6231e96d3 (image=quay.io/ceph/ceph:v20, name=eager_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:08 np0005603787 podman[81504]: 2026-01-31 09:58:08.143992874 +0000 UTC m=+0.107406987 container attach e539c922ec48699d62201b0bc8f44af3dbac3b557b988ec05ad417a6231e96d3 (image=quay.io/ceph/ceph:v20, name=eager_snyder, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:08 np0005603787 podman[81504]: 2026-01-31 09:58:08.055714108 +0000 UTC m=+0.019128151 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:08 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'rbd_support'
Jan 31 04:58:08 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'rgw'
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/1560414008' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:08 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:58:08 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'rook'
Jan 31 04:58:08 np0005603787 ceph-mgr[75453]: [progress INFO root] Writing back 2 completed events
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:08 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Added host compute-0
Jan 31 04:58:08 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 31 04:58:08 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Saving service mon spec with placement compute-0
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:58:08 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 04:58:08 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 04:58:09 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Jan 31 04:58:09 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:09 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 04:58:09 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 04:58:09 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Jan 31 04:58:09 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:09 np0005603787 ceph-mgr[75453]: [progress INFO root] update: starting ev 09041982-d7b8-42d0-a298-01b81d588f47 (Updating mgr deployment (-1 -> 1))
Jan 31 04:58:09 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.eqrvct from compute-0 -- ports [8765]
Jan 31 04:58:09 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.eqrvct from compute-0 -- ports [8765]
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:09 np0005603787 eager_snyder[81520]: Added host 'compute-0' with addr '192.168.122.100'
Jan 31 04:58:09 np0005603787 eager_snyder[81520]: Scheduled mon update...
Jan 31 04:58:09 np0005603787 eager_snyder[81520]: Scheduled mgr update...
Jan 31 04:58:09 np0005603787 eager_snyder[81520]: Scheduled osd.default_drive_group update...
Jan 31 04:58:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 04:58:09 np0005603787 systemd[1]: libpod-e539c922ec48699d62201b0bc8f44af3dbac3b557b988ec05ad417a6231e96d3.scope: Deactivated successfully.
Jan 31 04:58:09 np0005603787 podman[81504]: 2026-01-31 09:58:09.045185548 +0000 UTC m=+1.008599571 container died e539c922ec48699d62201b0bc8f44af3dbac3b557b988ec05ad417a6231e96d3 (image=quay.io/ceph/ceph:v20, name=eager_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 04:58:09 np0005603787 systemd[1]: var-lib-containers-storage-overlay-7f152f59c702ad2daa2a4fde8c6db34897b1972e7ed3cd64fdec8b13467a2c57-merged.mount: Deactivated successfully.
Jan 31 04:58:09 np0005603787 podman[81504]: 2026-01-31 09:58:09.085325437 +0000 UTC m=+1.048739460 container remove e539c922ec48699d62201b0bc8f44af3dbac3b557b988ec05ad417a6231e96d3 (image=quay.io/ceph/ceph:v20, name=eager_snyder, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 04:58:09 np0005603787 systemd[1]: libpod-conmon-e539c922ec48699d62201b0bc8f44af3dbac3b557b988ec05ad417a6231e96d3.scope: Deactivated successfully.
Jan 31 04:58:09 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'selftest'
Jan 31 04:58:09 np0005603787 ceph-mgr[80520]: mgr[py] Loading python module 'smb'
Jan 31 04:58:09 np0005603787 systemd[1]: Stopping Ceph mgr.compute-0.eqrvct for 962d77ae-dc67-5de8-89d8-3d1670c67b61...
Jan 31 04:58:09 np0005603787 python3[81703]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:09 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:09 np0005603787 podman[81729]: 2026-01-31 09:58:09.475000985 +0000 UTC m=+0.036586284 container create ff1e01925d55d26c93732b9d058ff336defb7501df9a4a8d3567708c133bb519 (image=quay.io/ceph/ceph:v20, name=wizardly_albattani, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:09 np0005603787 systemd[1]: Started libpod-conmon-ff1e01925d55d26c93732b9d058ff336defb7501df9a4a8d3567708c133bb519.scope.
Jan 31 04:58:09 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d296484beabeea088f0393e6f0056bad98bc7e8dc2e77a78581411d052873eae/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d296484beabeea088f0393e6f0056bad98bc7e8dc2e77a78581411d052873eae/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d296484beabeea088f0393e6f0056bad98bc7e8dc2e77a78581411d052873eae/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:09 np0005603787 podman[81729]: 2026-01-31 09:58:09.457894271 +0000 UTC m=+0.019479590 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:09 np0005603787 podman[81729]: 2026-01-31 09:58:09.556454576 +0000 UTC m=+0.118039895 container init ff1e01925d55d26c93732b9d058ff336defb7501df9a4a8d3567708c133bb519 (image=quay.io/ceph/ceph:v20, name=wizardly_albattani, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:09 np0005603787 podman[81729]: 2026-01-31 09:58:09.560522837 +0000 UTC m=+0.122108136 container start ff1e01925d55d26c93732b9d058ff336defb7501df9a4a8d3567708c133bb519 (image=quay.io/ceph/ceph:v20, name=wizardly_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:58:09 np0005603787 podman[81729]: 2026-01-31 09:58:09.564920976 +0000 UTC m=+0.126506275 container attach ff1e01925d55d26c93732b9d058ff336defb7501df9a4a8d3567708c133bb519 (image=quay.io/ceph/ceph:v20, name=wizardly_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Jan 31 04:58:09 np0005603787 podman[81762]: 2026-01-31 09:58:09.580701634 +0000 UTC m=+0.068336566 container died db1ea025034bda8574ccf32e1677cd425f1f571639f6d3cb2fd9e2e07f6698b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-eqrvct, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 04:58:09 np0005603787 systemd[1]: var-lib-containers-storage-overlay-a1243da6a2926922c52731e4794dc81bc04a419d53794fa8eaed3a4c3f1efd33-merged.mount: Deactivated successfully.
Jan 31 04:58:09 np0005603787 podman[81762]: 2026-01-31 09:58:09.626419525 +0000 UTC m=+0.114054467 container remove db1ea025034bda8574ccf32e1677cd425f1f571639f6d3cb2fd9e2e07f6698b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-eqrvct, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:09 np0005603787 bash[81762]: ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-eqrvct
Jan 31 04:58:09 np0005603787 systemd[1]: ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61@mgr.compute-0.eqrvct.service: Main process exited, code=exited, status=143/n/a
Jan 31 04:58:09 np0005603787 systemd[1]: ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61@mgr.compute-0.eqrvct.service: Failed with result 'exit-code'.
Jan 31 04:58:09 np0005603787 systemd[1]: Stopped Ceph mgr.compute-0.eqrvct for 962d77ae-dc67-5de8-89d8-3d1670c67b61.
Jan 31 04:58:09 np0005603787 systemd[1]: ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61@mgr.compute-0.eqrvct.service: Consumed 5.998s CPU time, 388.5M memory peak, read 0B from disk, written 147.5K to disk.
Jan 31 04:58:09 np0005603787 systemd[1]: Reloading.
Jan 31 04:58:09 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:58:09 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:58:09 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:58:10 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.eqrvct
Jan 31 04:58:10 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.eqrvct
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3869755435' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.eqrvct"} v 0)
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.eqrvct"} : dispatch
Jan 31 04:58:10 np0005603787 wizardly_albattani[81773]: 
Jan 31 04:58:10 np0005603787 wizardly_albattani[81773]: {"fsid":"962d77ae-dc67-5de8-89d8-3d1670c67b61","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":48,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-31T09:57:19:176309+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-31T09:57:19.182834+0000","services":{}},"progress_events":{}}
Jan 31 04:58:10 np0005603787 systemd[1]: libpod-ff1e01925d55d26c93732b9d058ff336defb7501df9a4a8d3567708c133bb519.scope: Deactivated successfully.
Jan 31 04:58:10 np0005603787 podman[81729]: 2026-01-31 09:58:10.192865962 +0000 UTC m=+0.754451281 container died ff1e01925d55d26c93732b9d058ff336defb7501df9a4a8d3567708c133bb519 (image=quay.io/ceph/ceph:v20, name=wizardly_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.eqrvct"}]': finished
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:10 np0005603787 ceph-mgr[75453]: [progress INFO root] complete: finished ev 09041982-d7b8-42d0-a298-01b81d588f47 (Updating mgr deployment (-1 -> 1))
Jan 31 04:58:10 np0005603787 ceph-mgr[75453]: [progress INFO root] Completed event 09041982-d7b8-42d0-a298-01b81d588f47 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:10 np0005603787 systemd[1]: var-lib-containers-storage-overlay-d296484beabeea088f0393e6f0056bad98bc7e8dc2e77a78581411d052873eae-merged.mount: Deactivated successfully.
Jan 31 04:58:10 np0005603787 podman[81729]: 2026-01-31 09:58:10.250364523 +0000 UTC m=+0.811949832 container remove ff1e01925d55d26c93732b9d058ff336defb7501df9a4a8d3567708c133bb519 (image=quay.io/ceph/ceph:v20, name=wizardly_albattani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 04:58:10 np0005603787 systemd[1]: libpod-conmon-ff1e01925d55d26c93732b9d058ff336defb7501df9a4a8d3567708c133bb519.scope: Deactivated successfully.
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: Added host compute-0
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: Saving service mon spec with placement compute-0
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: Saving service mgr spec with placement compute-0
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: Saving service osd.default_drive_group spec with placement compute-0
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: Removing daemon mgr.compute-0.eqrvct from compute-0 -- ports [8765]
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.eqrvct"} : dispatch
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.eqrvct"}]': finished
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:10 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:10 np0005603787 podman[82018]: 2026-01-31 09:58:10.793259539 +0000 UTC m=+0.112102694 container exec 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Jan 31 04:58:10 np0005603787 podman[82018]: 2026-01-31 09:58:10.927411931 +0000 UTC m=+0.246255046 container exec_died 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: Removing key for mgr.compute-0.eqrvct
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:11 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
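[note] The burst of mon_command/audit entries above is the cephadm mgr gathering what it needs to deploy OSDs on compute-0: a minimal ceph.conf and the client.bootstrap-osd keyring. The same material can be fetched manually; a minimal sketch (the exact file paths cephadm writes into the container are assumptions here):
    ceph config generate-minimal-conf > /etc/ceph/ceph.conf
    ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring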
Jan 31 04:58:11 np0005603787 podman[82177]: 2026-01-31 09:58:11.644706791 +0000 UTC m=+0.036494771 container create ab588bd8f2f94479f9f7fca8c5b9e8826c27e87f459f963573857cf7290c41ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:11 np0005603787 systemd[1]: Started libpod-conmon-ab588bd8f2f94479f9f7fca8c5b9e8826c27e87f459f963573857cf7290c41ba.scope.
Jan 31 04:58:11 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:11 np0005603787 podman[82177]: 2026-01-31 09:58:11.698882372 +0000 UTC m=+0.090670382 container init ab588bd8f2f94479f9f7fca8c5b9e8826c27e87f459f963573857cf7290c41ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 04:58:11 np0005603787 podman[82177]: 2026-01-31 09:58:11.704836424 +0000 UTC m=+0.096624404 container start ab588bd8f2f94479f9f7fca8c5b9e8826c27e87f459f963573857cf7290c41ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_carson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:11 np0005603787 agitated_carson[82193]: 167 167
Jan 31 04:58:11 np0005603787 systemd[1]: libpod-ab588bd8f2f94479f9f7fca8c5b9e8826c27e87f459f963573857cf7290c41ba.scope: Deactivated successfully.
Jan 31 04:58:11 np0005603787 podman[82177]: 2026-01-31 09:58:11.709145551 +0000 UTC m=+0.100933531 container attach ab588bd8f2f94479f9f7fca8c5b9e8826c27e87f459f963573857cf7290c41ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_carson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 04:58:11 np0005603787 podman[82177]: 2026-01-31 09:58:11.711018442 +0000 UTC m=+0.102806432 container died ab588bd8f2f94479f9f7fca8c5b9e8826c27e87f459f963573857cf7290c41ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:58:11 np0005603787 podman[82177]: 2026-01-31 09:58:11.626861438 +0000 UTC m=+0.018649448 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:11 np0005603787 systemd[1]: var-lib-containers-storage-overlay-922ca1605ae862d44d2af18a0a78aa3abaf6232e3d8747eecfd71fd69d8894a8-merged.mount: Deactivated successfully.
Jan 31 04:58:11 np0005603787 podman[82177]: 2026-01-31 09:58:11.743337219 +0000 UTC m=+0.135125199 container remove ab588bd8f2f94479f9f7fca8c5b9e8826c27e87f459f963573857cf7290c41ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 04:58:11 np0005603787 systemd[1]: libpod-conmon-ab588bd8f2f94479f9f7fca8c5b9e8826c27e87f459f963573857cf7290c41ba.scope: Deactivated successfully.
Jan 31 04:58:11 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:58:11 np0005603787 podman[82216]: 2026-01-31 09:58:11.846240852 +0000 UTC m=+0.035693329 container create fa083db14d5f0035e1e417f15f2dda8cb27602d4b69f2be568c5ecff0e6756f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_bouman, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 04:58:11 np0005603787 systemd[1]: Started libpod-conmon-fa083db14d5f0035e1e417f15f2dda8cb27602d4b69f2be568c5ecff0e6756f6.scope.
Jan 31 04:58:11 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:11 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1e55c238a5ff995993e9eca25785115009df15cffae0529cdbb6e0df605d65c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:11 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1e55c238a5ff995993e9eca25785115009df15cffae0529cdbb6e0df605d65c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:11 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1e55c238a5ff995993e9eca25785115009df15cffae0529cdbb6e0df605d65c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:11 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1e55c238a5ff995993e9eca25785115009df15cffae0529cdbb6e0df605d65c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:11 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1e55c238a5ff995993e9eca25785115009df15cffae0529cdbb6e0df605d65c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:11 np0005603787 podman[82216]: 2026-01-31 09:58:11.902811938 +0000 UTC m=+0.092264435 container init fa083db14d5f0035e1e417f15f2dda8cb27602d4b69f2be568c5ecff0e6756f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_bouman, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 04:58:11 np0005603787 podman[82216]: 2026-01-31 09:58:11.90838064 +0000 UTC m=+0.097833117 container start fa083db14d5f0035e1e417f15f2dda8cb27602d4b69f2be568c5ecff0e6756f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:58:11 np0005603787 podman[82216]: 2026-01-31 09:58:11.912409419 +0000 UTC m=+0.101861926 container attach fa083db14d5f0035e1e417f15f2dda8cb27602d4b69f2be568c5ecff0e6756f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_bouman, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:11 np0005603787 podman[82216]: 2026-01-31 09:58:11.830475725 +0000 UTC m=+0.019928222 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
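[note] The loving_bouman container is the cephadm-launched ceph-volume run; its first line below reports "0 physical, 3 LVM" data devices, i.e. the pre-created logical volumes ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2 backed by /dev/loop3-5. How those loop-backed VGs were prepared is not shown in this log; a plausible sketch, with assumed image paths and sizes (~20 GiB, matching the lv_size reported later), is:
    truncate -s 20G /var/lib/ceph-osd-0.img       # sparse backing file for one OSD
    losetup /dev/loop3 /var/lib/ceph-osd-0.img    # attach it as a loop device
    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -l 100%FREE -n ceph_lv0 ceph_vg0     # repeated analogously for loop4/ceph_vg1 and loop5/ceph_vg2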
Jan 31 04:58:12 np0005603787 loving_bouman[82232]: --> passed data devices: 0 physical, 3 LVM
Jan 31 04:58:12 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:12 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:12 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 4a39e342-98b4-4260-a68a-c160a0fcb60c
Jan 31 04:58:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "4a39e342-98b4-4260-a68a-c160a0fcb60c"} v 0)
Jan 31 04:58:12 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4232945784' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "4a39e342-98b4-4260-a68a-c160a0fcb60c"} : dispatch
Jan 31 04:58:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 31 04:58:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 04:58:12 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4232945784' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4a39e342-98b4-4260-a68a-c160a0fcb60c"}]': finished
Jan 31 04:58:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 31 04:58:12 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 31 04:58:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:12 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:12 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
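[note] At this point osd.0 exists in the OSD map (e4: 1 total, 0 up, 1 in) but no osd daemon has started yet, so the mgr's metadata query comes back empty; the error is transient and clears once the daemon boots and registers. For example:
    ceph osd metadata 0    # returns (2) No such file or directory until osd.0 has started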
Jan 31 04:58:13 np0005603787 loving_bouman[82232]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 31 04:58:13 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 31 04:58:13 np0005603787 lvm[82324]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 04:58:13 np0005603787 lvm[82324]: VG ceph_vg0 finished
Jan 31 04:58:13 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 04:58:13 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:13 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 31 04:58:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 04:58:13 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/4232945784' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "4a39e342-98b4-4260-a68a-c160a0fcb60c"} : dispatch
Jan 31 04:58:13 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/4232945784' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4a39e342-98b4-4260-a68a-c160a0fcb60c"}]': finished
Jan 31 04:58:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 31 04:58:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/981874236' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 04:58:13 np0005603787 loving_bouman[82232]: stderr: got monmap epoch 1
Jan 31 04:58:13 np0005603787 loving_bouman[82232]: --> Creating keyring file for osd.0
Jan 31 04:58:13 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 31 04:58:13 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 31 04:58:13 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 4a39e342-98b4-4260-a68a-c160a0fcb60c --setuser ceph --setgroup ceph
Jan 31 04:58:13 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:58:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:58:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:58:13 np0005603787 ceph-mgr[75453]: [progress INFO root] Writing back 3 completed events
Jan 31 04:58:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 04:58:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:58:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:58:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:58:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:58:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:13 np0005603787 ceph-mon[75160]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 31 04:58:13 np0005603787 ceph-mon[75160]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 04:58:14 np0005603787 loving_bouman[82232]: stderr: 2026-01-31T09:58:13.609+0000 7f02cc2ad8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 31 04:58:14 np0005603787 loving_bouman[82232]: stderr: 2026-01-31T09:58:13.631+0000 7f02cc2ad8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 31 04:58:14 np0005603787 loving_bouman[82232]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 31 04:58:14 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 04:58:14 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 31 04:58:14 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:14 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:14 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 04:58:14 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 04:58:14 np0005603787 loving_bouman[82232]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 31 04:58:14 np0005603787 loving_bouman[82232]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
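[note] The mount/chown/ln/ceph-osd --mkfs/ceph-bluestore-tool sequence above is ceph-volume's "lvm prepare" followed by "lvm activate"; the "No valid bdev label found" and "_read_fsid unparsable uuid" stderr lines are expected on a freshly created device during mkfs. Run standalone, the whole step is roughly equivalent to the following sketch (cephadm wraps it in a container, as seen here):
    ceph-volume lvm create --bluestore --data ceph_vg0/ceph_lv0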
Jan 31 04:58:14 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:14 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:14 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:14 np0005603787 ceph-mon[75160]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 31 04:58:14 np0005603787 ceph-mon[75160]: Cluster is now healthy
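[note] With the first OSD in the map, the TOO_FEW_OSDS check clears (osd_pool_default_size is 1 on this cluster) and the mon reports the cluster healthy. To confirm from the mon host:
    ceph -s
    ceph health detail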
Jan 31 04:58:14 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 6af7a565-fb2b-4a54-af6d-dd6e6079328b
Jan 31 04:58:14 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b"} v 0)
Jan 31 04:58:14 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2652329441' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b"} : dispatch
Jan 31 04:58:14 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 31 04:58:14 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 04:58:14 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2652329441' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b"}]': finished
Jan 31 04:58:14 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 31 04:58:14 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 31 04:58:14 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:14 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:14 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 04:58:14 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 04:58:14 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 04:58:14 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 04:58:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 04:58:15 np0005603787 lvm[83274]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 04:58:15 np0005603787 lvm[83274]: VG ceph_vg1 finished
Jan 31 04:58:15 np0005603787 loving_bouman[82232]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 31 04:58:15 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Jan 31 04:58:15 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 04:58:15 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:15 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 31 04:58:15 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2652329441' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b"} : dispatch
Jan 31 04:58:15 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2652329441' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b"}]': finished
Jan 31 04:58:15 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 31 04:58:15 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/327465892' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 04:58:15 np0005603787 loving_bouman[82232]: stderr: got monmap epoch 1
Jan 31 04:58:15 np0005603787 loving_bouman[82232]: --> Creating keyring file for osd.1
Jan 31 04:58:15 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 31 04:58:15 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 31 04:58:15 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 6af7a565-fb2b-4a54-af6d-dd6e6079328b --setuser ceph --setgroup ceph
Jan 31 04:58:15 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:58:16 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:58:16 np0005603787 loving_bouman[82232]: stderr: 2026-01-31T09:58:15.695+0000 7f0bfc49c8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Jan 31 04:58:16 np0005603787 loving_bouman[82232]: stderr: 2026-01-31T09:58:15.726+0000 7f0bfc49c8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 31 04:58:16 np0005603787 loving_bouman[82232]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Jan 31 04:58:16 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 04:58:16 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 31 04:58:16 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:16 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:16 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 04:58:16 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 04:58:16 np0005603787 loving_bouman[82232]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 31 04:58:16 np0005603787 loving_bouman[82232]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Jan 31 04:58:16 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:16 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:16 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 446dbac2-6402-4180-8661-54a9bd1028fb
Jan 31 04:58:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 04:58:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "446dbac2-6402-4180-8661-54a9bd1028fb"} v 0)
Jan 31 04:58:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2973682512' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "446dbac2-6402-4180-8661-54a9bd1028fb"} : dispatch
Jan 31 04:58:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 31 04:58:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 04:58:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2973682512' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "446dbac2-6402-4180-8661-54a9bd1028fb"}]': finished
Jan 31 04:58:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Jan 31 04:58:17 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Jan 31 04:58:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 04:58:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 04:58:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:17 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 04:58:17 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:17 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 04:58:17 np0005603787 lvm[84219]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 04:58:17 np0005603787 lvm[84219]: VG ceph_vg2 finished
Jan 31 04:58:17 np0005603787 loving_bouman[82232]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 31 04:58:17 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Jan 31 04:58:17 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 04:58:17 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:17 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 31 04:58:17 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2973682512' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "446dbac2-6402-4180-8661-54a9bd1028fb"} : dispatch
Jan 31 04:58:17 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2973682512' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "446dbac2-6402-4180-8661-54a9bd1028fb"}]': finished
Jan 31 04:58:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 31 04:58:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4235288899' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 04:58:17 np0005603787 loving_bouman[82232]: stderr: got monmap epoch 1
Jan 31 04:58:17 np0005603787 loving_bouman[82232]: --> Creating keyring file for osd.2
Jan 31 04:58:17 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 31 04:58:17 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 31 04:58:17 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 446dbac2-6402-4180-8661-54a9bd1028fb --setuser ceph --setgroup ceph
Jan 31 04:58:17 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:58:18 np0005603787 loving_bouman[82232]: stderr: 2026-01-31T09:58:17.823+0000 7fc80d7428c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Jan 31 04:58:18 np0005603787 loving_bouman[82232]: stderr: 2026-01-31T09:58:17.854+0000 7fc80d7428c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 31 04:58:18 np0005603787 loving_bouman[82232]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Jan 31 04:58:18 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 04:58:18 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 31 04:58:18 np0005603787 loving_bouman[82232]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:18 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:18 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 04:58:18 np0005603787 loving_bouman[82232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 04:58:18 np0005603787 loving_bouman[82232]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 31 04:58:18 np0005603787 loving_bouman[82232]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
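[note] All three logical volumes have now been prepared and activated as osd.0, osd.1, and osd.2. Their presence in the map can be checked with, for example:
    ceph osd tree
    ceph osd metadata    # with no id, dumps metadata for every OSD once the daemons have registered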
Jan 31 04:58:18 np0005603787 systemd[1]: libpod-fa083db14d5f0035e1e417f15f2dda8cb27602d4b69f2be568c5ecff0e6756f6.scope: Deactivated successfully.
Jan 31 04:58:18 np0005603787 systemd[1]: libpod-fa083db14d5f0035e1e417f15f2dda8cb27602d4b69f2be568c5ecff0e6756f6.scope: Consumed 5.126s CPU time.
Jan 31 04:58:18 np0005603787 podman[85132]: 2026-01-31 09:58:18.890717026 +0000 UTC m=+0.027738994 container died fa083db14d5f0035e1e417f15f2dda8cb27602d4b69f2be568c5ecff0e6756f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_bouman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 31 04:58:18 np0005603787 systemd[1]: var-lib-containers-storage-overlay-b1e55c238a5ff995993e9eca25785115009df15cffae0529cdbb6e0df605d65c-merged.mount: Deactivated successfully.
Jan 31 04:58:18 np0005603787 podman[85132]: 2026-01-31 09:58:18.97560201 +0000 UTC m=+0.112623948 container remove fa083db14d5f0035e1e417f15f2dda8cb27602d4b69f2be568c5ecff0e6756f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_bouman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 04:58:18 np0005603787 systemd[1]: libpod-conmon-fa083db14d5f0035e1e417f15f2dda8cb27602d4b69f2be568c5ecff0e6756f6.scope: Deactivated successfully.
Jan 31 04:58:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 04:58:19 np0005603787 podman[85209]: 2026-01-31 09:58:19.388304563 +0000 UTC m=+0.031847516 container create 77ebfffaa943e56249955fc7f64ee17663c0494b0dcfbc71ca6d7ec9de32ae96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 04:58:19 np0005603787 systemd[1]: Started libpod-conmon-77ebfffaa943e56249955fc7f64ee17663c0494b0dcfbc71ca6d7ec9de32ae96.scope.
Jan 31 04:58:19 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:19 np0005603787 podman[85209]: 2026-01-31 09:58:19.456865384 +0000 UTC m=+0.100408357 container init 77ebfffaa943e56249955fc7f64ee17663c0494b0dcfbc71ca6d7ec9de32ae96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:19 np0005603787 podman[85209]: 2026-01-31 09:58:19.463395542 +0000 UTC m=+0.106938505 container start 77ebfffaa943e56249955fc7f64ee17663c0494b0dcfbc71ca6d7ec9de32ae96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 04:58:19 np0005603787 vigorous_robinson[85225]: 167 167
Jan 31 04:58:19 np0005603787 podman[85209]: 2026-01-31 09:58:19.468054298 +0000 UTC m=+0.111597251 container attach 77ebfffaa943e56249955fc7f64ee17663c0494b0dcfbc71ca6d7ec9de32ae96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:19 np0005603787 systemd[1]: libpod-77ebfffaa943e56249955fc7f64ee17663c0494b0dcfbc71ca6d7ec9de32ae96.scope: Deactivated successfully.
Jan 31 04:58:19 np0005603787 podman[85209]: 2026-01-31 09:58:19.468692024 +0000 UTC m=+0.112234997 container died 77ebfffaa943e56249955fc7f64ee17663c0494b0dcfbc71ca6d7ec9de32ae96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 04:58:19 np0005603787 podman[85209]: 2026-01-31 09:58:19.37383305 +0000 UTC m=+0.017376013 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:19 np0005603787 systemd[1]: var-lib-containers-storage-overlay-83b46fe7f013c007b2d2b57a0f373e9277e86a0c070a869ac523362164d4fc1b-merged.mount: Deactivated successfully.
Jan 31 04:58:19 np0005603787 podman[85209]: 2026-01-31 09:58:19.514736274 +0000 UTC m=+0.158279217 container remove 77ebfffaa943e56249955fc7f64ee17663c0494b0dcfbc71ca6d7ec9de32ae96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:19 np0005603787 systemd[1]: libpod-conmon-77ebfffaa943e56249955fc7f64ee17663c0494b0dcfbc71ca6d7ec9de32ae96.scope: Deactivated successfully.
Jan 31 04:58:19 np0005603787 podman[85247]: 2026-01-31 09:58:19.639323166 +0000 UTC m=+0.038591108 container create e4683abe3e9dbf9fb88c3c15069f0236a5fbfb85bee6f51e0c7ff1461752846d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_rosalind, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:19 np0005603787 systemd[1]: Started libpod-conmon-e4683abe3e9dbf9fb88c3c15069f0236a5fbfb85bee6f51e0c7ff1461752846d.scope.
Jan 31 04:58:19 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4efc7aabcf188e15e41346d1acbadb78d624363bbf7a52c48de4308ff7bd94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4efc7aabcf188e15e41346d1acbadb78d624363bbf7a52c48de4308ff7bd94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4efc7aabcf188e15e41346d1acbadb78d624363bbf7a52c48de4308ff7bd94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4efc7aabcf188e15e41346d1acbadb78d624363bbf7a52c48de4308ff7bd94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:19 np0005603787 podman[85247]: 2026-01-31 09:58:19.702479511 +0000 UTC m=+0.101747453 container init e4683abe3e9dbf9fb88c3c15069f0236a5fbfb85bee6f51e0c7ff1461752846d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_rosalind, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 04:58:19 np0005603787 podman[85247]: 2026-01-31 09:58:19.708418713 +0000 UTC m=+0.107686655 container start e4683abe3e9dbf9fb88c3c15069f0236a5fbfb85bee6f51e0c7ff1461752846d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_rosalind, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:19 np0005603787 podman[85247]: 2026-01-31 09:58:19.712087852 +0000 UTC m=+0.111355804 container attach e4683abe3e9dbf9fb88c3c15069f0236a5fbfb85bee6f51e0c7ff1461752846d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_rosalind, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:19 np0005603787 podman[85247]: 2026-01-31 09:58:19.622069178 +0000 UTC m=+0.021337150 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:19 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
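[note] The JSON emitted by the reverent_rosalind container below matches the shape of ceph-volume's listing of prepared LVs and their ceph.* tags; it is presumably the output of something like the following, run inside the ceph container (an assumption, since the exact invocation is not logged):
    ceph-volume lvm list --format json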
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]: {
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:    "0": [
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:        {
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "devices": [
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "/dev/loop3"
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            ],
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "lv_name": "ceph_lv0",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "lv_size": "21470642176",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "name": "ceph_lv0",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "tags": {
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.cluster_name": "ceph",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.crush_device_class": "",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.encrypted": "0",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.objectstore": "bluestore",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.osd_id": "0",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.type": "block",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.vdo": "0",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.with_tpm": "0"
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            },
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "type": "block",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "vg_name": "ceph_vg0"
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:        }
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:    ],
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:    "1": [
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:        {
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "devices": [
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "/dev/loop4"
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            ],
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "lv_name": "ceph_lv1",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "lv_size": "21470642176",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "name": "ceph_lv1",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "tags": {
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.cluster_name": "ceph",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.crush_device_class": "",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.encrypted": "0",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.objectstore": "bluestore",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.osd_id": "1",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.type": "block",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.vdo": "0",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.with_tpm": "0"
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            },
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "type": "block",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "vg_name": "ceph_vg1"
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:        }
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:    ],
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:    "2": [
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:        {
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "devices": [
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "/dev/loop5"
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            ],
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "lv_name": "ceph_lv2",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "lv_size": "21470642176",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "name": "ceph_lv2",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "tags": {
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.cluster_name": "ceph",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.crush_device_class": "",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.encrypted": "0",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.objectstore": "bluestore",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.osd_id": "2",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.type": "block",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.vdo": "0",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:                "ceph.with_tpm": "0"
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            },
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "type": "block",
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:            "vg_name": "ceph_vg2"
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:        }
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]:    ]
Jan 31 04:58:19 np0005603787 reverent_rosalind[85263]: }
Jan 31 04:58:19 np0005603787 systemd[1]: libpod-e4683abe3e9dbf9fb88c3c15069f0236a5fbfb85bee6f51e0c7ff1461752846d.scope: Deactivated successfully.
Jan 31 04:58:19 np0005603787 podman[85247]: 2026-01-31 09:58:19.97467334 +0000 UTC m=+0.373941292 container died e4683abe3e9dbf9fb88c3c15069f0236a5fbfb85bee6f51e0c7ff1461752846d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_rosalind, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:20 np0005603787 systemd[1]: var-lib-containers-storage-overlay-7d4efc7aabcf188e15e41346d1acbadb78d624363bbf7a52c48de4308ff7bd94-merged.mount: Deactivated successfully.
Jan 31 04:58:20 np0005603787 podman[85247]: 2026-01-31 09:58:20.021725147 +0000 UTC m=+0.420993089 container remove e4683abe3e9dbf9fb88c3c15069f0236a5fbfb85bee6f51e0c7ff1461752846d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 04:58:20 np0005603787 systemd[1]: libpod-conmon-e4683abe3e9dbf9fb88c3c15069f0236a5fbfb85bee6f51e0c7ff1461752846d.scope: Deactivated successfully.
Jan 31 04:58:20 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 31 04:58:20 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 31 04:58:20 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:58:20 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:58:20 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 31 04:58:20 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 31 04:58:20 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 31 04:58:20 np0005603787 podman[85374]: 2026-01-31 09:58:20.554437738 +0000 UTC m=+0.038557088 container create 133d869a2472f7b90d53df3533cea6e8d4fb302f4fddc0a1957d022388d7c5db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_heyrovsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:20 np0005603787 systemd[1]: Started libpod-conmon-133d869a2472f7b90d53df3533cea6e8d4fb302f4fddc0a1957d022388d7c5db.scope.
Jan 31 04:58:20 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:20 np0005603787 podman[85374]: 2026-01-31 09:58:20.616289736 +0000 UTC m=+0.100409137 container init 133d869a2472f7b90d53df3533cea6e8d4fb302f4fddc0a1957d022388d7c5db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_heyrovsky, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:20 np0005603787 podman[85374]: 2026-01-31 09:58:20.621911799 +0000 UTC m=+0.106031149 container start 133d869a2472f7b90d53df3533cea6e8d4fb302f4fddc0a1957d022388d7c5db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:20 np0005603787 jolly_heyrovsky[85390]: 167 167
Jan 31 04:58:20 np0005603787 systemd[1]: libpod-133d869a2472f7b90d53df3533cea6e8d4fb302f4fddc0a1957d022388d7c5db.scope: Deactivated successfully.
Jan 31 04:58:20 np0005603787 conmon[85390]: conmon 133d869a2472f7b90d53 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-133d869a2472f7b90d53df3533cea6e8d4fb302f4fddc0a1957d022388d7c5db.scope/container/memory.events
Jan 31 04:58:20 np0005603787 podman[85374]: 2026-01-31 09:58:20.626560695 +0000 UTC m=+0.110680145 container attach 133d869a2472f7b90d53df3533cea6e8d4fb302f4fddc0a1957d022388d7c5db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:20 np0005603787 podman[85374]: 2026-01-31 09:58:20.626904205 +0000 UTC m=+0.111023555 container died 133d869a2472f7b90d53df3533cea6e8d4fb302f4fddc0a1957d022388d7c5db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:20 np0005603787 podman[85374]: 2026-01-31 09:58:20.536666755 +0000 UTC m=+0.020786125 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:20 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f7fae0eabee146c76bded58d86e40312b82f9a39c7869e318dddd4d6ede755a6-merged.mount: Deactivated successfully.
Jan 31 04:58:20 np0005603787 podman[85374]: 2026-01-31 09:58:20.661464123 +0000 UTC m=+0.145583473 container remove 133d869a2472f7b90d53df3533cea6e8d4fb302f4fddc0a1957d022388d7c5db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:20 np0005603787 systemd[1]: libpod-conmon-133d869a2472f7b90d53df3533cea6e8d4fb302f4fddc0a1957d022388d7c5db.scope: Deactivated successfully.
Jan 31 04:58:20 np0005603787 podman[85419]: 2026-01-31 09:58:20.863750284 +0000 UTC m=+0.042684630 container create d45b25b8e06ea1f3b252048f3e72ecb0d21ee5ac36b32c8bb9231b1531e2fc71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 04:58:20 np0005603787 systemd[1]: Started libpod-conmon-d45b25b8e06ea1f3b252048f3e72ecb0d21ee5ac36b32c8bb9231b1531e2fc71.scope.
Jan 31 04:58:20 np0005603787 podman[85419]: 2026-01-31 09:58:20.839098575 +0000 UTC m=+0.018032941 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:20 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8772dffe8049f1e2aef967703767910adce3555cb0f6c6e9a5d4432e53ea1638/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8772dffe8049f1e2aef967703767910adce3555cb0f6c6e9a5d4432e53ea1638/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8772dffe8049f1e2aef967703767910adce3555cb0f6c6e9a5d4432e53ea1638/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8772dffe8049f1e2aef967703767910adce3555cb0f6c6e9a5d4432e53ea1638/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8772dffe8049f1e2aef967703767910adce3555cb0f6c6e9a5d4432e53ea1638/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:20 np0005603787 podman[85419]: 2026-01-31 09:58:20.983281818 +0000 UTC m=+0.162216164 container init d45b25b8e06ea1f3b252048f3e72ecb0d21ee5ac36b32c8bb9231b1531e2fc71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 04:58:20 np0005603787 podman[85419]: 2026-01-31 09:58:20.989725864 +0000 UTC m=+0.168660230 container start d45b25b8e06ea1f3b252048f3e72ecb0d21ee5ac36b32c8bb9231b1531e2fc71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 04:58:20 np0005603787 podman[85419]: 2026-01-31 09:58:20.998532122 +0000 UTC m=+0.177466468 container attach d45b25b8e06ea1f3b252048f3e72ecb0d21ee5ac36b32c8bb9231b1531e2fc71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate-test, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 04:58:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 04:58:21 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate-test[85436]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 31 04:58:21 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate-test[85436]:                            [--no-systemd] [--no-tmpfs]
Jan 31 04:58:21 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate-test[85436]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 04:58:21 np0005603787 systemd[1]: libpod-d45b25b8e06ea1f3b252048f3e72ecb0d21ee5ac36b32c8bb9231b1531e2fc71.scope: Deactivated successfully.
Jan 31 04:58:21 np0005603787 podman[85419]: 2026-01-31 09:58:21.18446695 +0000 UTC m=+0.363401306 container died d45b25b8e06ea1f3b252048f3e72ecb0d21ee5ac36b32c8bb9231b1531e2fc71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate-test, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 04:58:21 np0005603787 systemd[1]: var-lib-containers-storage-overlay-8772dffe8049f1e2aef967703767910adce3555cb0f6c6e9a5d4432e53ea1638-merged.mount: Deactivated successfully.
Jan 31 04:58:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:58:21 np0005603787 podman[85419]: 2026-01-31 09:58:21.243382279 +0000 UTC m=+0.422316625 container remove d45b25b8e06ea1f3b252048f3e72ecb0d21ee5ac36b32c8bb9231b1531e2fc71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate-test, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 04:58:21 np0005603787 systemd[1]: libpod-conmon-d45b25b8e06ea1f3b252048f3e72ecb0d21ee5ac36b32c8bb9231b1531e2fc71.scope: Deactivated successfully.
Jan 31 04:58:21 np0005603787 systemd[1]: Reloading.
Jan 31 04:58:21 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:58:21 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:58:21 np0005603787 ceph-mon[75160]: Deploying daemon osd.0 on compute-0
Jan 31 04:58:21 np0005603787 systemd[1]: Reloading.
Jan 31 04:58:21 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:58:21 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:58:21 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:58:21 np0005603787 systemd[1]: Starting Ceph osd.0 for 962d77ae-dc67-5de8-89d8-3d1670c67b61...
Jan 31 04:58:22 np0005603787 podman[85596]: 2026-01-31 09:58:22.119851101 +0000 UTC m=+0.048923789 container create d0e3e1fe23d9166cb09ed3517391d0262b44a4cf51ab53a9793b3404f7f57821 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 04:58:22 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb832d930e6f4cfe968477bf709b8b7593a1f1aad9fc6a98ac132c6dc1f94e78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb832d930e6f4cfe968477bf709b8b7593a1f1aad9fc6a98ac132c6dc1f94e78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb832d930e6f4cfe968477bf709b8b7593a1f1aad9fc6a98ac132c6dc1f94e78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb832d930e6f4cfe968477bf709b8b7593a1f1aad9fc6a98ac132c6dc1f94e78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb832d930e6f4cfe968477bf709b8b7593a1f1aad9fc6a98ac132c6dc1f94e78/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:22 np0005603787 podman[85596]: 2026-01-31 09:58:22.089825205 +0000 UTC m=+0.018897983 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:22 np0005603787 podman[85596]: 2026-01-31 09:58:22.194932019 +0000 UTC m=+0.124004737 container init d0e3e1fe23d9166cb09ed3517391d0262b44a4cf51ab53a9793b3404f7f57821 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:22 np0005603787 podman[85596]: 2026-01-31 09:58:22.200177201 +0000 UTC m=+0.129249889 container start d0e3e1fe23d9166cb09ed3517391d0262b44a4cf51ab53a9793b3404f7f57821 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:22 np0005603787 podman[85596]: 2026-01-31 09:58:22.205530646 +0000 UTC m=+0.134603334 container attach d0e3e1fe23d9166cb09ed3517391d0262b44a4cf51ab53a9793b3404f7f57821 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 04:58:22 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate[85611]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:22 np0005603787 bash[85596]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:22 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate[85611]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:22 np0005603787 bash[85596]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:22 np0005603787 lvm[85699]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 04:58:22 np0005603787 lvm[85697]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 04:58:22 np0005603787 lvm[85696]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 04:58:22 np0005603787 lvm[85699]: VG ceph_vg2 finished
Jan 31 04:58:22 np0005603787 lvm[85696]: VG ceph_vg0 finished
Jan 31 04:58:22 np0005603787 lvm[85697]: VG ceph_vg1 finished
Jan 31 04:58:22 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate[85611]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 04:58:22 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate[85611]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:22 np0005603787 bash[85596]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 04:58:22 np0005603787 bash[85596]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:22 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate[85611]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:22 np0005603787 bash[85596]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 04:58:23 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate[85611]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 04:58:23 np0005603787 bash[85596]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 04:58:23 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate[85611]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 31 04:58:23 np0005603787 bash[85596]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 31 04:58:23 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate[85611]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 bash[85596]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate[85611]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 bash[85596]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate[85611]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 04:58:23 np0005603787 bash[85596]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 04:58:23 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate[85611]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 04:58:23 np0005603787 bash[85596]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 04:58:23 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate[85611]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 31 04:58:23 np0005603787 bash[85596]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 31 04:58:23 np0005603787 systemd[1]: libpod-d0e3e1fe23d9166cb09ed3517391d0262b44a4cf51ab53a9793b3404f7f57821.scope: Deactivated successfully.
Jan 31 04:58:23 np0005603787 systemd[1]: libpod-d0e3e1fe23d9166cb09ed3517391d0262b44a4cf51ab53a9793b3404f7f57821.scope: Consumed 1.219s CPU time.
Jan 31 04:58:23 np0005603787 podman[85801]: 2026-01-31 09:58:23.217535728 +0000 UTC m=+0.021460514 container died d0e3e1fe23d9166cb09ed3517391d0262b44a4cf51ab53a9793b3404f7f57821 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 04:58:23 np0005603787 systemd[1]: var-lib-containers-storage-overlay-fb832d930e6f4cfe968477bf709b8b7593a1f1aad9fc6a98ac132c6dc1f94e78-merged.mount: Deactivated successfully.
Jan 31 04:58:23 np0005603787 podman[85801]: 2026-01-31 09:58:23.275589063 +0000 UTC m=+0.079513799 container remove d0e3e1fe23d9166cb09ed3517391d0262b44a4cf51ab53a9793b3404f7f57821 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 04:58:23 np0005603787 podman[85860]: 2026-01-31 09:58:23.409892969 +0000 UTC m=+0.030173820 container create e5b4158e31f5e4d198c9d23c3480a091e570d943b7f55eda5ab0e93778df7ee0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:58:23 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc3f4cac61abbc9a69e864aa82ec4f515a8df3a46f6b27a49c044ba857d35425/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:23 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc3f4cac61abbc9a69e864aa82ec4f515a8df3a46f6b27a49c044ba857d35425/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:23 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc3f4cac61abbc9a69e864aa82ec4f515a8df3a46f6b27a49c044ba857d35425/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:23 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc3f4cac61abbc9a69e864aa82ec4f515a8df3a46f6b27a49c044ba857d35425/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:23 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc3f4cac61abbc9a69e864aa82ec4f515a8df3a46f6b27a49c044ba857d35425/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:23 np0005603787 podman[85860]: 2026-01-31 09:58:23.467192014 +0000 UTC m=+0.087472875 container init e5b4158e31f5e4d198c9d23c3480a091e570d943b7f55eda5ab0e93778df7ee0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:23 np0005603787 podman[85860]: 2026-01-31 09:58:23.474861503 +0000 UTC m=+0.095142354 container start e5b4158e31f5e4d198c9d23c3480a091e570d943b7f55eda5ab0e93778df7ee0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:23 np0005603787 bash[85860]: e5b4158e31f5e4d198c9d23c3480a091e570d943b7f55eda5ab0e93778df7ee0
Jan 31 04:58:23 np0005603787 podman[85860]: 2026-01-31 09:58:23.396898676 +0000 UTC m=+0.017179547 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:23 np0005603787 systemd[1]: Started Ceph osd.0 for 962d77ae-dc67-5de8-89d8-3d1670c67b61.
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: pidfile_write: ignore empty --pid-file
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 04:58:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 04:58:23 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:58:23 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 31 04:58:23 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 31 04:58:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:58:23 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:58:23 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 31 04:58:23 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea400 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfea000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: load: jerasure load: lrc 
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 04:58:23 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627cfebc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627dc81800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627dc81800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627dc81800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627dc81800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs mount
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs mount shared_bdev_used = 0
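The bluefs _init_alloc line reports its capacity and allocation block size in hex; converting them shows they match the 20 GiB bdev opened above and a 64 KiB allocation unit (a small sketch, unit conversion only):

    # Sketch: convert the bluefs _init_alloc hex figures to familiar units.
    capacity = 0x4ffc00000        # 21470642176 bytes
    block_size = 0x10000          # 65536 bytes
    print(capacity, capacity / 2**30)   # 21470642176 -> ~20.0 GiB, same as the bdev open size
    print(block_size // 1024)           # 64 KiB bluefs allocation unit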
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
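The _prepare_db_environment line sizes both db paths at 20397110067 bytes, which works out to 95% of the 21470642176-byte block device; a one-line check (the 0.95 factor is an assumption about how that figure was derived):

    # Sketch: the db/db.slow path size looks like 95% of the block device size.
    block_dev_size = 21470642176
    print(int(block_dev_size * 0.95))   # 20397110067, matching the db_paths line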
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: RocksDB version: 7.9.2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Git sha 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: DB SUMMARY
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: DB Session ID:  JTMBMMLQFZG4FQ42VRAA
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: CURRENT file:  CURRENT
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                         Options.error_if_exists: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.create_if_missing: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                                     Options.env: 0x55627ce7bea0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                                Options.info_log: 0x55627df028a0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                              Options.statistics: (nil)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.use_fsync: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                              Options.db_log_dir: 
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.write_buffer_manager: 0x55627cedcb40
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.unordered_write: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.row_cache: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                              Options.wal_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.two_write_queues: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.wal_compression: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.atomic_flush: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.max_background_jobs: 4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.max_background_compactions: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.max_subcompactions: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.max_open_files: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Compression algorithms supported:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: #011kZSTD supported: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: #011kXpressCompression supported: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: #011kBZip2Compression supported: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: #011kLZ4Compression supported: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: #011kZlibCompression supported: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: #011kSnappyCompression supported: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df02c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55627ce7f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
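Within the table_factory dump above, the BinnedLRUCache capacity is 483183820 bytes, which matches the kv share (0.45) of the 1 GiB BlueStore cache reported by _set_cache_sizes earlier; a quick check, assuming the kv ratio is what sized this cache:

    # Sketch: block_cache capacity equals 45% of the 1 GiB cache budget.
    print(int(1073741824 * 0.45))   # 483183820, the block_cache capacity above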
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
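Two of the compaction figures in this dump are simple multiples of other options logged alongside them: level capacities grow from max_bytes_for_level_base (1 GiB) by the 8x multiplier, and max_compaction_bytes is 25x target_file_size_base, which is RocksDB's usual default relation (the 25x factor is an assumption based on that default, not stated in the log):

    # Sketch: relations between the compaction options logged above.
    target_file_size_base = 67108864          # 64 MiB
    max_bytes_for_level_base = 1073741824     # 1 GiB
    multiplier = 8
    print(target_file_size_base * 25)         # 1677721600 == max_compaction_bytes
    for level in range(1, 4):
        print(level, max_bytes_for_level_base * multiplier ** (level - 1))  # L1=1 GiB, L2=8 GiB, L3=64 GiB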
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df02c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55627ce7f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df02c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55627ce7f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df02c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55627ce7f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df02c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55627ce7f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df02c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55627ce7f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df02c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55627ce7f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df02c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55627ce7fa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df02c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55627ce7fa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
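The options block above repeats essentially verbatim for every sharded column family that follows; the memtable settings in it (16 MiB write buffers, up to 64 per column family, flushed in merges of at least 6) bound how much memory each shard's write path can pin. A rough sketch of that arithmetic, not taken from the log itself:

```python
# Worked example (not from the log): memory bounds implied by the memtable
# settings printed in the column-family dump above.
write_buffer_size = 16 * 1024 * 1024   # Options.write_buffer_size: 16777216
max_write_buffer_number = 64           # Options.max_write_buffer_number: 64
min_merge = 6                          # Options.min_write_buffer_number_to_merge: 6

# Up to max_write_buffer_number memtables may exist per column family
# before writes stall, so the theoretical ceiling per shard is:
ceiling_gib = write_buffer_size * max_write_buffer_number / 2**30
print(f"per-CF memtable ceiling: {ceiling_gib:.1f} GiB")        # 1.0 GiB

# A flush waits for min_write_buffer_number_to_merge immutable memtables,
# so a typical flush writes roughly:
print(f"typical flush: {write_buffer_size * min_merge / 2**20:.0f} MiB")  # 96 MiB
```

In practice Options.max_total_wal_size (1 GiB, printed in the DB-wide dump further down) should force flushes well before any single shard approaches that ceiling.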
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df02c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55627ce7fa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
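With Options.level_compaction_dynamic_level_bytes left at 0, the target size of each LSM level follows directly from max_bytes_for_level_base and max_bytes_for_level_multiplier in the dump above (the per-level addtl multipliers are all 1). A quick sketch of the implied capacities:

```python
# Worked example (not from the log): target capacity per LSM level implied
# by the compaction settings above, with dynamic level bytes disabled.
base = 1 << 30      # Options.max_bytes_for_level_base: 1073741824 (1 GiB)
multiplier = 8      # Options.max_bytes_for_level_multiplier: 8
num_levels = 7      # Options.num_levels: 7

for level in range(1, num_levels):
    # the per-level addtl multipliers are all 1 in the dump, so they drop out
    target = base * multiplier ** (level - 1)
    print(f"L{level}: {target / 2**30:g} GiB")
# L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, L5: 4096 GiB, L6: 32768 GiB
```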
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
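The recovered column families reflect BlueStore's sharded-RocksDB layout. The log prints only the names; the roles sketched below follow the default bluestore_rocksdb_cfs scheme and should be treated as assumptions, not log content:

```python
# Best-effort map of the sharded column families recovered above. The log
# only prints the names; the roles below follow the default BlueStore
# bluestore_rocksdb_cfs layout and are assumptions, not log content.
CF_HINTS = {
    "default": "all keys without a dedicated shard",
    "m-0 .. m-2": "omap data, 3 shards",
    "p-0 .. p-2": "per-PG omap data, 3 shards",
    "O-0 .. O-2": "object metadata (onodes), 3 shards",
    "L": "deferred-write transactions",
    "P": "pgmeta omap",
}
for name, role in CF_HINTS.items():
    print(f"{name:12s} {role}")
```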
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e1b7212f-2a17-4b7c-893c-f85682df8808
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853503892173, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853503892774, "job": 1, "event": "recovery_finished"}
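RocksDB's EVENT_LOG_v1 lines, like the recovery pair above, carry a JSON payload after a fixed marker, which makes them easy to machine-read. A minimal parsing sketch built against exactly these two lines:

```python
import json
import re
from datetime import datetime, timezone

# Minimal sketch: extract the JSON payload from a rocksdb EVENT_LOG_v1 line
# as it appears in this journal (marker text copied from the lines above).
LINE = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1769853503892774, '
        '"job": 1, "event": "recovery_finished"}')

payload = re.search(r'EVENT_LOG_v1 (\{.*\})', LINE).group(1)
event = json.loads(payload)

# time_micros is microseconds since the Unix epoch
ts = datetime.fromtimestamp(event["time_micros"] / 1e6, tz=timezone.utc)
print(event["event"], ts.isoformat())  # recovery_finished 2026-01-31T09:58:23...+00:00
```

The decoded time (09:58:23 UTC) sits five hours ahead of the 04:58:23 journal stamp, which is presumably just the node's local timezone.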
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
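The _open_db line records the effective RocksDB option string BlueStore applied. It is a flat key=value list, so a naive split is enough to inspect it:

```python
# Minimal sketch: split the option string logged by _open_db above into a
# dict. A plain comma split is safe here because no value embeds a comma.
OPTS = ("compression=kLZ4Compression,max_write_buffer_number=64,"
        "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
        "write_buffer_size=16777216,max_background_jobs=4,"
        "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
        "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
        "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

options = dict(kv.split("=", 1) for kv in OPTS.split(","))
print(options["compression"])             # kLZ4Compression
print(int(options["write_buffer_size"]))  # 16777216
```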
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: freelist init
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: freelist _read_cfg
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
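The _init_alloc line reports the allocator state in hex; a quick arithmetic check of the human-readable figures it prints:

```python
# Quick check (not from the log) of the allocator figures printed above.
capacity = 0x4ffc00000   # bytes; equals the bdev size of 21470642176
free     = 0x4ffbfd000
block    = 0x1000        # min_alloc_size / block size: 4 KiB

print(f"capacity:  {capacity / 2**30:.2f} GiB")              # ~20.00 GiB
print(f"allocated: {(capacity - free) // 1024} KiB in use")  # 12 KiB
# 'loaded 20 GiB in 2 extents' means nearly the whole device is one free
# run, which is why the reported fragmentation is ~1.9e-07.
```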
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs umount
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627dc81800 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627dc81800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627dc81800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627dc81800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bdev(0x55627dc81800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs mount
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluefs mount shared_bdev_used = 27262976
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
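The db_paths sizes set by _prepare_db_environment look like a fixed share of the 20 GiB device. The 95% figure below is an inference from the numbers, not something the log states:

```python
# Quick check (an inference, not stated in the log): the db_paths size set
# by _prepare_db_environment above is almost exactly 95% of the raw device.
device   = 21470642176   # bdev open size, from the log
db_paths = 20397110067   # size given for both 'db' and 'db.slow'

print(f"share: {db_paths / device:.4f}")  # 0.9500
# For scale, BlueFS itself reports shared_bdev_used = 27262976 bytes,
# i.e. exactly 26 MiB actually in use at this point.
```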
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: RocksDB version: 7.9.2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Git sha 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: DB SUMMARY
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: DB Session ID:  JTMBMMLQFZG4FQ42VRAB
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: CURRENT file:  CURRENT
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 bytes
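The DB SUMMARY enumerates the three logical directories BlueFS exposes to RocksDB (db, db.slow, db.wal), all backed here by the single shared block device. A small sketch that pulls the file inventory out of exactly these summary lines:

```python
import re

# Minimal sketch: pull the file inventory out of the DB SUMMARY lines above
# (the patterns are written against exactly these messages).
SUMMARY = [
    "SST files in db dir, Total Num: 1, files: 000030.sst",
    "SST files in db.slow dir, Total Num: 0, files:",
    "Write Ahead Log file in db.wal: 000031.log size: 5097 bytes",
]

for line in SUMMARY:
    sst = re.match(r"SST files in (\S+) dir, Total Num: \d+, files:\s*(.*)", line)
    if sst:
        print(sst.group(1), "->", sst.group(2).split() or "none")
        continue
    wal = re.match(r"Write Ahead Log file in (\S+): (\S+) size: (\d+)", line)
    print(wal.group(1), "->", wal.group(2), f"({wal.group(3)} bytes)")
```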
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                         Options.error_if_exists: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.create_if_missing: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                                     Options.env: 0x55627dcc7dc0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                                Options.info_log: 0x55627df5b660
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                              Options.statistics: (nil)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.use_fsync: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                              Options.db_log_dir: 
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.write_buffer_manager: 0x55627cedd900
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.unordered_write: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.row_cache: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                              Options.wal_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.two_write_queues: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.wal_compression: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.atomic_flush: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.max_background_jobs: 4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.max_background_compactions: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.max_subcompactions: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.delayed_write_rate: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.max_open_files: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Compression algorithms supported:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: #011kZSTD supported: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: #011kXpressCompression supported: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: #011kBZip2Compression supported: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: #011kLZ4Compression supported: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: #011kZlibCompression supported: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: #011kSnappyCompression supported: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: DMutex implementation: pthread_mutex_t
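Two details of the DB-wide dump are worth decoding: Options.wal_recovery_mode is printed as a bare integer, and the compression inventory explains the LZ4 choice. A hedged sketch, with the enum names taken from the public RocksDB headers rather than from this log:

```python
# Hedged mapping of RocksDB's WALRecoveryMode values (names taken from the
# public rocksdb/options.h, since the log prints only the integer).
WAL_RECOVERY_MODES = {
    0: "kTolerateCorruptedTailRecords",
    1: "kAbsoluteConsistency",
    2: "kPointInTimeRecovery",
    3: "kSkipAnyCorruptedRecords",
}
print(WAL_RECOVERY_MODES[2])  # matches 'Recovering log #31 mode 2' earlier

# The compression inventory above shows only LZ4/LZ4HC, Zlib and Snappy
# compiled in, consistent with Options.compression: LZ4 and the
# kLZ4Compression value in the _open_db option string.
```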
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df03f00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55627ce7f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df03f00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55627ce7f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df03f00)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   cache_index_and_filter_blocks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   pin_top_level_index_and_filter: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   index_type: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   data_block_index_type: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   index_shortening: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   checksum: 4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   no_block_cache: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache: 0x55627ce7f8d0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache_name: BinnedLRUCache
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache_options:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     capacity : 483183820
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     num_shard_bits : 4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     strict_capacity_limit : 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     high_pri_pool_ratio: 0.000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache_compressed: (nil)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   persistent_cache: (nil)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_size: 4096
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_size_deviation: 10
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_restart_interval: 16
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   index_block_restart_interval: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   metadata_block_size: 4096
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   partition_filters: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   use_delta_encoding: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   filter_policy: bloomfilter
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   whole_key_filtering: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   verify_compression: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   read_amp_bytes_per_bit: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   format_version: 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   enable_index_compression: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_align: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   max_auto_readahead_size: 262144
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   prepopulate_block_cache: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   initial_auto_readahead_size: 8192
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df03f00)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   cache_index_and_filter_blocks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   pin_top_level_index_and_filter: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   index_type: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   data_block_index_type: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   index_shortening: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   checksum: 4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   no_block_cache: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache: 0x55627ce7f8d0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache_name: BinnedLRUCache
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache_options:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     capacity : 483183820
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     num_shard_bits : 4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     strict_capacity_limit : 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     high_pri_pool_ratio: 0.000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache_compressed: (nil)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   persistent_cache: (nil)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_size: 4096
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_size_deviation: 10
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_restart_interval: 16
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   index_block_restart_interval: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   metadata_block_size: 4096
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   partition_filters: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   use_delta_encoding: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   filter_policy: bloomfilter
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   whole_key_filtering: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   verify_compression: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   read_amp_bytes_per_bit: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   format_version: 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   enable_index_compression: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_align: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   max_auto_readahead_size: 262144
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   prepopulate_block_cache: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   initial_auto_readahead_size: 8192
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df03f00)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   cache_index_and_filter_blocks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   pin_top_level_index_and_filter: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   index_type: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   data_block_index_type: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   index_shortening: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   checksum: 4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   no_block_cache: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache: 0x55627ce7f8d0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache_name: BinnedLRUCache
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache_options:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     capacity : 483183820
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     num_shard_bits : 4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     strict_capacity_limit : 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     high_pri_pool_ratio: 0.000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache_compressed: (nil)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   persistent_cache: (nil)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_size: 4096
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_size_deviation: 10
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_restart_interval: 16
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   index_block_restart_interval: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   metadata_block_size: 4096
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   partition_filters: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   use_delta_encoding: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   filter_policy: bloomfilter
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   whole_key_filtering: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   verify_compression: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   read_amp_bytes_per_bit: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   format_version: 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   enable_index_compression: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_align: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   max_auto_readahead_size: 262144
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   prepopulate_block_cache: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   initial_auto_readahead_size: 8192
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df03f00)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   cache_index_and_filter_blocks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   pin_top_level_index_and_filter: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   index_type: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   data_block_index_type: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   index_shortening: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   checksum: 4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   no_block_cache: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache: 0x55627ce7f8d0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache_name: BinnedLRUCache
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache_options:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     capacity : 483183820
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     num_shard_bits : 4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     strict_capacity_limit : 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     high_pri_pool_ratio: 0.000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache_compressed: (nil)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   persistent_cache: (nil)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_size: 4096
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_size_deviation: 10
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_restart_interval: 16
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   index_block_restart_interval: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   metadata_block_size: 4096
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   partition_filters: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   use_delta_encoding: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   filter_policy: bloomfilter
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   whole_key_filtering: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   verify_compression: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   read_amp_bytes_per_bit: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   format_version: 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   enable_index_compression: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_align: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   max_auto_readahead_size: 262144
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   prepopulate_block_cache: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   initial_auto_readahead_size: 8192
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df03f00)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   cache_index_and_filter_blocks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   pin_top_level_index_and_filter: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   index_type: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   data_block_index_type: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   index_shortening: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   checksum: 4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   no_block_cache: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache: 0x55627ce7f8d0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache_name: BinnedLRUCache
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache_options:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     capacity : 483183820
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     num_shard_bits : 4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     strict_capacity_limit : 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     high_pri_pool_ratio: 0.000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_cache_compressed: (nil)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   persistent_cache: (nil)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_size: 4096
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_size_deviation: 10
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_restart_interval: 16
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   index_block_restart_interval: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   metadata_block_size: 4096
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   partition_filters: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   use_delta_encoding: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   filter_policy: bloomfilter
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   whole_key_filtering: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   verify_compression: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   read_amp_bytes_per_bit: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   format_version: 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   enable_index_compression: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   block_align: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   max_auto_readahead_size: 262144
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   prepopulate_block_cache: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   initial_auto_readahead_size: 8192
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df03f20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55627ce7f4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df03f20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55627ce7f4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55627df03f20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55627ce7f4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e1b7212f-2a17-4b7c-893c-f85682df8808
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853503960909, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853503968728, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853503, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1b7212f-2a17-4b7c-893c-f85682df8808", "db_session_id": "JTMBMMLQFZG4FQ42VRAB", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853503973213, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853503, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1b7212f-2a17-4b7c-893c-f85682df8808", "db_session_id": "JTMBMMLQFZG4FQ42VRAB", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853503976341, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853503, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1b7212f-2a17-4b7c-893c-f85682df8808", "db_session_id": "JTMBMMLQFZG4FQ42VRAB", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853503977940, "job": 1, "event": "recovery_finished"}
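Each EVENT_LOG_v1 entry above carries a JSON payload after the "EVENT_LOG_v1 " marker (the recovery start/finish records and the per-column-family table_file_creation records). A minimal Python sketch, purely illustrative, for pulling those payloads out of a saved copy of this journal; "osd.log" is a hypothetical file name, not anything referenced by this host:

import json

# Each matching line ends with a JSON object after the "EVENT_LOG_v1 " marker;
# parse it and print a couple of fields ("event" is always present,
# "file_size" only appears on table_file_creation records).
marker = "EVENT_LOG_v1 "
with open("osd.log") as f:          # hypothetical saved copy of these journal lines
    for line in f:
        if marker in line:
            payload = json.loads(line.split(marker, 1)[1])
            print(payload["event"], payload.get("file_size", ""))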
Jan 31 04:58:23 np0005603787 ceph-osd[85879]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 04:58:23 np0005603787 podman[86212]: 2026-01-31 09:58:23.990190382 +0000 UTC m=+0.040800569 container create 91393058143c441fa191f4724b3efd0ec25c0883cd8a12f2dc3d64b21d497f58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_pike, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 04:58:24 np0005603787 ceph-osd[85879]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55627e0e6000
Jan 31 04:58:24 np0005603787 ceph-osd[85879]: rocksdb: DB pointer 0x55627e0bc000
Jan 31 04:58:24 np0005603787 ceph-osd[85879]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
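The _open_db line above records the option string BlueStore handed to RocksDB for this OSD, matching the per-column-family dumps earlier in this section. A minimal Python sketch, purely illustrative, that splits that comma-separated string (copied verbatim from the line above) into key/value pairs:

# Option string copied from the "_open_db opened rocksdb path db options" line above.
opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

# Every entry is a single key=value pair, so one split per comma is enough.
opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
print(opts["write_buffer_size"])    # -> 16777216 (16 MiB, as in the dumps above)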
Jan 31 04:58:24 np0005603787 ceph-osd[85879]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 31 04:58:24 np0005603787 ceph-osd[85879]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 31 04:58:24 np0005603787 ceph-osd[85879]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:58:24 np0005603787 ceph-osd[85879]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55627ce7f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] 
**#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55627ce7f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55627ce7f8d0#2 capacity: 460.80 MB usag
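The two DUMPING STATS entries above embed multi-line tables; control characters are escaped here as octal #NNN sequences (#012 = newline, #011 = tab, as also seen in the "(skipping printing options)" lines), so each dump arrives as one very long record, wrapped and cut short above. A minimal Python sketch, assuming the journal has been saved to a hypothetical file "osd.log", that re-expands the escapes so the embedded compaction-stats tables can be read:

# Re-expand the octal escape sequences so the embedded RocksDB stats tables
# print on separate lines again. "osd.log" is a hypothetical saved copy of
# these journal lines.
with open("osd.log") as f:
    for line in f:
        if "#012" in line:
            print(line.rstrip("\n").replace("#012", "\n").replace("#011", "\t"))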
Jan 31 04:58:24 np0005603787 ceph-osd[85879]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 04:58:24 np0005603787 ceph-osd[85879]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 04:58:24 np0005603787 ceph-osd[85879]: _get_class not permitted to load lua
Jan 31 04:58:24 np0005603787 ceph-osd[85879]: _get_class not permitted to load sdk
Jan 31 04:58:24 np0005603787 ceph-osd[85879]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 04:58:24 np0005603787 ceph-osd[85879]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 04:58:24 np0005603787 ceph-osd[85879]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 04:58:24 np0005603787 ceph-osd[85879]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 04:58:24 np0005603787 ceph-osd[85879]: osd.0 0 load_pgs
Jan 31 04:58:24 np0005603787 ceph-osd[85879]: osd.0 0 load_pgs opened 0 pgs
Jan 31 04:58:24 np0005603787 ceph-osd[85879]: osd.0 0 log_to_monitors true
Jan 31 04:58:24 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0[85875]: 2026-01-31T09:58:24.015+0000 7f8ebac2b8c0 -1 osd.0 0 log_to_monitors true
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2238925525,v1:192.168.122.100:6803/2238925525]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
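The request dispatched above is osd.0 registering its device class during boot; the same assignment can be made or corrected by hand with the device-class commands (illustrative only):

    # a previously assigned class must be removed before it can be changed
    ceph osd crush rm-device-class osd.0
    ceph osd crush set-device-class hdd osd.0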
Jan 31 04:58:24 np0005603787 systemd[1]: Started libpod-conmon-91393058143c441fa191f4724b3efd0ec25c0883cd8a12f2dc3d64b21d497f58.scope.
Jan 31 04:58:24 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:24 np0005603787 podman[86212]: 2026-01-31 09:58:24.062852604 +0000 UTC m=+0.113462791 container init 91393058143c441fa191f4724b3efd0ec25c0883cd8a12f2dc3d64b21d497f58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_pike, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 04:58:24 np0005603787 podman[86212]: 2026-01-31 09:58:23.969565281 +0000 UTC m=+0.020175478 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:24 np0005603787 podman[86212]: 2026-01-31 09:58:24.067629413 +0000 UTC m=+0.118239600 container start 91393058143c441fa191f4724b3efd0ec25c0883cd8a12f2dc3d64b21d497f58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_pike, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:58:24 np0005603787 podman[86212]: 2026-01-31 09:58:24.07264346 +0000 UTC m=+0.123253647 container attach 91393058143c441fa191f4724b3efd0ec25c0883cd8a12f2dc3d64b21d497f58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:24 np0005603787 vigilant_pike[86441]: 167 167
Jan 31 04:58:24 np0005603787 systemd[1]: libpod-91393058143c441fa191f4724b3efd0ec25c0883cd8a12f2dc3d64b21d497f58.scope: Deactivated successfully.
Jan 31 04:58:24 np0005603787 conmon[86441]: conmon 91393058143c441fa191 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91393058143c441fa191f4724b3efd0ec25c0883cd8a12f2dc3d64b21d497f58.scope/container/memory.events
Jan 31 04:58:24 np0005603787 podman[86212]: 2026-01-31 09:58:24.076210546 +0000 UTC m=+0.126820723 container died 91393058143c441fa191f4724b3efd0ec25c0883cd8a12f2dc3d64b21d497f58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 04:58:24 np0005603787 systemd[1]: var-lib-containers-storage-overlay-d538fdb29388ffe1ef264adc01aff4c3aba791605df961f34a2a9935cf9d70bf-merged.mount: Deactivated successfully.
Jan 31 04:58:24 np0005603787 podman[86212]: 2026-01-31 09:58:24.121176677 +0000 UTC m=+0.171786874 container remove 91393058143c441fa191f4724b3efd0ec25c0883cd8a12f2dc3d64b21d497f58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 04:58:24 np0005603787 systemd[1]: libpod-conmon-91393058143c441fa191f4724b3efd0ec25c0883cd8a12f2dc3d64b21d497f58.scope: Deactivated successfully.
Jan 31 04:58:24 np0005603787 podman[86470]: 2026-01-31 09:58:24.282039163 +0000 UTC m=+0.031619668 container create ca701d648c473d5aba592ef748e80180def62fa1e936a6a464f4d19da53a3e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate-test, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:24 np0005603787 systemd[1]: Started libpod-conmon-ca701d648c473d5aba592ef748e80180def62fa1e936a6a464f4d19da53a3e12.scope.
Jan 31 04:58:24 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:24 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b8ff53c28e7ed518561c89dcf5fdc766bd41de55a32fbb5cd41249a4f1dec8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:24 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b8ff53c28e7ed518561c89dcf5fdc766bd41de55a32fbb5cd41249a4f1dec8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:24 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b8ff53c28e7ed518561c89dcf5fdc766bd41de55a32fbb5cd41249a4f1dec8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:24 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b8ff53c28e7ed518561c89dcf5fdc766bd41de55a32fbb5cd41249a4f1dec8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:24 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b8ff53c28e7ed518561c89dcf5fdc766bd41de55a32fbb5cd41249a4f1dec8/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:24 np0005603787 podman[86470]: 2026-01-31 09:58:24.267008845 +0000 UTC m=+0.016589370 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:24 np0005603787 podman[86470]: 2026-01-31 09:58:24.372764496 +0000 UTC m=+0.122345101 container init ca701d648c473d5aba592ef748e80180def62fa1e936a6a464f4d19da53a3e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate-test, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 04:58:24 np0005603787 podman[86470]: 2026-01-31 09:58:24.383315423 +0000 UTC m=+0.132895928 container start ca701d648c473d5aba592ef748e80180def62fa1e936a6a464f4d19da53a3e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:24 np0005603787 podman[86470]: 2026-01-31 09:58:24.387948629 +0000 UTC m=+0.137529144 container attach ca701d648c473d5aba592ef748e80180def62fa1e936a6a464f4d19da53a3e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: Deploying daemon osd.1 on compute-0
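cephadm fetches the OSD keyring with "auth get" before deploying the daemon; the keyring and the deployment state can also be inspected manually, for example:

    ceph auth get osd.1
    ceph orch ps        # daemons cephadm has placed (or is placing), filterable by host or daemon type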
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: from='osd.0 [v2:192.168.122.100:6802/2238925525,v1:192.168.122.100:6803/2238925525]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 31 04:58:24 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate-test[86486]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 31 04:58:24 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate-test[86486]:                            [--no-systemd] [--no-tmpfs]
Jan 31 04:58:24 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate-test[86486]: ceph-volume activate: error: unrecognized arguments: --bad-option
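The "-activate-test" container deliberately passes a bogus flag to probe ceph-volume; a real activation call uses only the options shown in the usage text above, e.g. (the OSD fsid is left as a placeholder):

    ceph-volume activate --osd-id 1 --osd-uuid <OSD_FSID> --no-systemd --no-tmpfs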
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 04:58:24 np0005603787 systemd[1]: libpod-ca701d648c473d5aba592ef748e80180def62fa1e936a6a464f4d19da53a3e12.scope: Deactivated successfully.
Jan 31 04:58:24 np0005603787 podman[86470]: 2026-01-31 09:58:24.589699415 +0000 UTC m=+0.339279910 container died ca701d648c473d5aba592ef748e80180def62fa1e936a6a464f4d19da53a3e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate-test, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2238925525,v1:192.168.122.100:6803/2238925525]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2238925525,v1:192.168.122.100:6803/2238925525]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
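The initial weight 0.0195 is just the device size expressed in TiB (the ~20 GiB LV seen later in this log: 20/1024 ≈ 0.0195); the equivalent manual placement would be roughly:

    ceph osd crush create-or-move osd.0 0.0195 host=compute-0 root=default
    ceph osd tree        # confirm osd.0 sits under host=compute-0 / root=default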
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:24 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 04:58:24 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:24 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
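The "failed to return metadata" errors are expected at this stage: osd.0-2 already exist in the OSD map but none has booted far enough to push its metadata to the monitor. Once an OSD reports in, the same query succeeds, e.g.:

    ceph osd metadata 0        # hostname, devices and bluestore details, once osd.0 is up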
Jan 31 04:58:24 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f6b8ff53c28e7ed518561c89dcf5fdc766bd41de55a32fbb5cd41249a4f1dec8-merged.mount: Deactivated successfully.
Jan 31 04:58:24 np0005603787 podman[86470]: 2026-01-31 09:58:24.643117275 +0000 UTC m=+0.392697780 container remove ca701d648c473d5aba592ef748e80180def62fa1e936a6a464f4d19da53a3e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate-test, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 04:58:24 np0005603787 systemd[1]: libpod-conmon-ca701d648c473d5aba592ef748e80180def62fa1e936a6a464f4d19da53a3e12.scope: Deactivated successfully.
Jan 31 04:58:24 np0005603787 systemd[1]: Reloading.
Jan 31 04:58:24 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:58:24 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:58:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 04:58:25 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 04:58:25 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 04:58:25 np0005603787 systemd[1]: Reloading.
Jan 31 04:58:25 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:58:25 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:58:25 np0005603787 systemd[1]: Starting Ceph osd.1 for 962d77ae-dc67-5de8-89d8-3d1670c67b61...
Jan 31 04:58:25 np0005603787 podman[86649]: 2026-01-31 09:58:25.492839991 +0000 UTC m=+0.045002753 container create e372d9e2381ee57791a2c35734f676d6512b568b02f17e99655749c98db03e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:25 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce822cfa8ca8f5a461eb33030f14383d74b5605a8aa037583bf0aa953c017f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce822cfa8ca8f5a461eb33030f14383d74b5605a8aa037583bf0aa953c017f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce822cfa8ca8f5a461eb33030f14383d74b5605a8aa037583bf0aa953c017f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce822cfa8ca8f5a461eb33030f14383d74b5605a8aa037583bf0aa953c017f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce822cfa8ca8f5a461eb33030f14383d74b5605a8aa037583bf0aa953c017f3/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:25 np0005603787 podman[86649]: 2026-01-31 09:58:25.473702161 +0000 UTC m=+0.025864943 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:25 np0005603787 podman[86649]: 2026-01-31 09:58:25.58565254 +0000 UTC m=+0.137815422 container init e372d9e2381ee57791a2c35734f676d6512b568b02f17e99655749c98db03e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 31 04:58:25 np0005603787 podman[86649]: 2026-01-31 09:58:25.599532117 +0000 UTC m=+0.151694849 container start e372d9e2381ee57791a2c35734f676d6512b568b02f17e99655749c98db03e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 04:58:25 np0005603787 podman[86649]: 2026-01-31 09:58:25.614519034 +0000 UTC m=+0.166681766 container attach e372d9e2381ee57791a2c35734f676d6512b568b02f17e99655749c98db03e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 04:58:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2238925525,v1:192.168.122.100:6803/2238925525]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 04:58:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Jan 31 04:58:25 np0005603787 ceph-osd[85879]: osd.0 0 done with init, starting boot process
Jan 31 04:58:25 np0005603787 ceph-osd[85879]: osd.0 0 start_boot
Jan 31 04:58:25 np0005603787 ceph-osd[85879]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 04:58:25 np0005603787 ceph-osd[85879]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 04:58:25 np0005603787 ceph-osd[85879]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 04:58:25 np0005603787 ceph-osd[85879]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 04:58:25 np0005603787 ceph-osd[85879]: osd.0 0  bench count 12288000 bsize 4 KiB
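The maybe_override_options_for_qos lines and the 12288000-byte / 4 KiB bench above are the mClock scheduler measuring the device at startup to size its QoS limits. The resulting values can be inspected, and a comparable bench re-run once the OSD is up, along these lines:

    ceph config show osd.0 osd_max_backfills
    ceph config show osd.0 osd_mclock_max_capacity_iops_hdd   # capacity derived from the startup bench
    ceph tell osd.0 bench 12288000 4096                       # same byte count / block size as above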
Jan 31 04:58:25 np0005603787 ceph-mon[75160]: from='osd.0 [v2:192.168.122.100:6802/2238925525,v1:192.168.122.100:6803/2238925525]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 31 04:58:25 np0005603787 ceph-mon[75160]: from='osd.0 [v2:192.168.122.100:6802/2238925525,v1:192.168.122.100:6803/2238925525]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 04:58:25 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Jan 31 04:58:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 04:58:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 04:58:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:25 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 04:58:25 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 04:58:25 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:25 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2238925525; not ready for session (expect reconnect)
Jan 31 04:58:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:25 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 04:58:25 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate[86664]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:25 np0005603787 bash[86649]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:25 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate[86664]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:25 np0005603787 bash[86649]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:25 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:58:26 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:58:26 np0005603787 lvm[86747]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 04:58:26 np0005603787 lvm[86750]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 04:58:26 np0005603787 lvm[86750]: VG ceph_vg1 finished
Jan 31 04:58:26 np0005603787 lvm[86747]: VG ceph_vg0 finished
Jan 31 04:58:26 np0005603787 lvm[86752]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 04:58:26 np0005603787 lvm[86752]: VG ceph_vg2 finished
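The lvm events above show the loop-device-backed PVs (/dev/loop3-5) completing the three ceph_vg* volume groups that hold the OSD LVs; the layout can be confirmed with the standard LVM tools:

    pvs
    vgs ceph_vg0 ceph_vg1 ceph_vg2
    lvs -o lv_name,vg_name,lv_size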
Jan 31 04:58:26 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate[86664]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 04:58:26 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate[86664]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:26 np0005603787 bash[86649]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 04:58:26 np0005603787 bash[86649]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:26 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate[86664]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:26 np0005603787 bash[86649]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:26 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate[86664]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 04:58:26 np0005603787 bash[86649]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 04:58:26 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate[86664]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 31 04:58:26 np0005603787 bash[86649]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 31 04:58:26 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate[86664]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:26 np0005603787 bash[86649]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:26 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate[86664]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:26 np0005603787 bash[86649]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:26 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate[86664]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 04:58:26 np0005603787 bash[86649]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 04:58:26 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate[86664]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 04:58:26 np0005603787 bash[86649]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 04:58:26 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate[86664]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 31 04:58:26 np0005603787 bash[86649]: --> ceph-volume lvm activate successful for osd ID: 1
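Everything from the chown through the block symlink above is what "ceph-volume lvm activate" performs for osd.1; outside of cephadm the equivalent manual invocation would be roughly (OSD fsid left as a placeholder):

    ceph-volume lvm list                                 # shows each OSD's LV and its osd fsid
    ceph-volume lvm activate 1 <OSD_FSID> --no-systemd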
Jan 31 04:58:26 np0005603787 systemd[1]: libpod-e372d9e2381ee57791a2c35734f676d6512b568b02f17e99655749c98db03e68.scope: Deactivated successfully.
Jan 31 04:58:26 np0005603787 systemd[1]: libpod-e372d9e2381ee57791a2c35734f676d6512b568b02f17e99655749c98db03e68.scope: Consumed 1.328s CPU time.
Jan 31 04:58:26 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2238925525; not ready for session (expect reconnect)
Jan 31 04:58:26 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:26 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:26 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 04:58:26 np0005603787 podman[86858]: 2026-01-31 09:58:26.688625571 +0000 UTC m=+0.020507338 container died e372d9e2381ee57791a2c35734f676d6512b568b02f17e99655749c98db03e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:26 np0005603787 ceph-mon[75160]: from='osd.0 [v2:192.168.122.100:6802/2238925525,v1:192.168.122.100:6803/2238925525]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 04:58:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 04:58:27 np0005603787 systemd[1]: var-lib-containers-storage-overlay-3ce822cfa8ca8f5a461eb33030f14383d74b5605a8aa037583bf0aa953c017f3-merged.mount: Deactivated successfully.
Jan 31 04:58:27 np0005603787 podman[86858]: 2026-01-31 09:58:27.305825344 +0000 UTC m=+0.637707081 container remove e372d9e2381ee57791a2c35734f676d6512b568b02f17e99655749c98db03e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1-activate, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:27 np0005603787 podman[86914]: 2026-01-31 09:58:27.513974824 +0000 UTC m=+0.070788062 container create c50175b83e0d912533584687dc5ff046a5fff119de8c1d67f93048b8e8f85501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 04:58:27 np0005603787 podman[86914]: 2026-01-31 09:58:27.463213126 +0000 UTC m=+0.020026394 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c3b80edd96f8c562ce81680111564c588cce36b187f43ec6e0b6ee4c6a5c04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c3b80edd96f8c562ce81680111564c588cce36b187f43ec6e0b6ee4c6a5c04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c3b80edd96f8c562ce81680111564c588cce36b187f43ec6e0b6ee4c6a5c04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c3b80edd96f8c562ce81680111564c588cce36b187f43ec6e0b6ee4c6a5c04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c3b80edd96f8c562ce81680111564c588cce36b187f43ec6e0b6ee4c6a5c04/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:27 np0005603787 podman[86914]: 2026-01-31 09:58:27.638787423 +0000 UTC m=+0.195600661 container init c50175b83e0d912533584687dc5ff046a5fff119de8c1d67f93048b8e8f85501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:27 np0005603787 podman[86914]: 2026-01-31 09:58:27.644058055 +0000 UTC m=+0.200871313 container start c50175b83e0d912533584687dc5ff046a5fff119de8c1d67f93048b8e8f85501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:27 np0005603787 bash[86914]: c50175b83e0d912533584687dc5ff046a5fff119de8c1d67f93048b8e8f85501
Jan 31 04:58:27 np0005603787 systemd[1]: Started Ceph osd.1 for 962d77ae-dc67-5de8-89d8-3d1670c67b61.
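cephadm wraps each daemon in a per-cluster systemd unit; assuming the usual ceph-<fsid>@<daemon>.service naming scheme, osd.1 on this host can be managed with:

    systemctl status ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61@osd.1.service
    journalctl -u ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61@osd.1.service -f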
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 04:58:27 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2238925525; not ready for session (expect reconnect)
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: pidfile_write: ignore empty --pid-file
Jan 31 04:58:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:27 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 04:58:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 04:58:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26400 /var/lib/ceph/osd/ceph-1/block) close
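The repeated bdev open/close cycles are BlueStore probing the block device's label and geometry before mounting BlueFS; the same label can be read directly (typically with the daemon stopped), for instance:

    ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-1/block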
Jan 31 04:58:27 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:58:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 31 04:58:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 31 04:58:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:58:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:58:27 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Jan 31 04:58:27 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb26000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 04:58:27 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: load: jerasure load: lrc 
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:27 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bdev(0x558dafb27c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bdev(0x558db07c7800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bdev(0x558db07c7800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bdev(0x558db07c7800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bdev(0x558db07c7800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs mount
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs mount shared_bdev_used = 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: RocksDB version: 7.9.2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Git sha 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: DB SUMMARY
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: DB Session ID:  MYGVM5UM26NUCO34OLHV
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: CURRENT file:  CURRENT
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                         Options.error_if_exists: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.create_if_missing: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                                     Options.env: 0x558daf9b7ea0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                                Options.info_log: 0x558db0a128a0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                              Options.statistics: (nil)
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.use_fsync: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                              Options.db_log_dir: 
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.write_buffer_manager: 0x558db08b8b40
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.unordered_write: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.row_cache: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                              Options.wal_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.two_write_queues: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.wal_compression: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.atomic_flush: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.max_background_jobs: 4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.max_background_compactions: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.max_subcompactions: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.max_open_files: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Compression algorithms supported:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: 	kZSTD supported: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: 	kXpressCompression supported: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: 	kBZip2Compression supported: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: 	kLZ4Compression supported: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: 	kZlibCompression supported: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: 	kSnappyCompression supported: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a12c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558daf9bb8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a12c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558daf9bb8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a12c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558daf9bb8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a12c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558daf9bb8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a12c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558daf9bb8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a12c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558daf9bb8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a12c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558daf9bb8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a12c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558daf9bba30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a12c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558daf9bba30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a12c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558daf9bba30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
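The "table_factory options" dumps above reach the journal with control characters escaped as octal (#011 is a tab, #012 is a newline), which is why each column family's BlockBasedTable settings appear as one run-on line. A minimal sketch for reading them, assuming plain Python and nothing Ceph-specific, that expands those escapes back into the multi-line form RocksDB originally emitted:

    import re, sys

    def unescape_journal(line: str) -> str:
        # rsyslog escapes control characters as '#' plus three octal digits,
        # e.g. '#011' (tab) and '#012' (newline).
        return re.sub(r'#([0-7]{3})', lambda m: chr(int(m.group(1), 8)), line)

    for raw in sys.stdin:
        print(unescape_journal(raw.rstrip('\n')))

Piping the table_factory lines through this turns the #012-separated fields (block_size: 4096, format_version: 5, and so on) back into one option per line.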
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 9a091446-c12e-4981-8950-d72fc7aacd5d
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853508022859, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853508023896, "job": 1, "event": "recovery_finished"}
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
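The _open_db line records the option string BlueStore handed to RocksDB as a single comma-separated list. A small sketch, assuming plain Python and that the string stays the flat key=value list shown here (sizes such as "2MB" are left as strings), for splitting it into a dict so individual settings can be checked against the per-column-family dumps earlier in the log:

    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
                "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    # Split "k=v,k=v,..." into a dict; purely numeric values become ints.
    opts = {}
    for pair in opts_str.split(','):
        key, _, value = pair.partition('=')
        opts[key] = int(value) if value.isdigit() else value

    assert opts['write_buffer_size'] == 16777216     # matches Options.write_buffer_size above
    assert opts['max_write_buffer_number'] == 64     # matches Options.max_write_buffer_number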
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: freelist init
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: freelist _read_cfg
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
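The _init_alloc line reports the device in hex. As a quick check (pure arithmetic, nothing Ceph-specific assumed): 0x4ffc00000 bytes is 21470642176, the same 20 GiB the bdev "open size" line below prints, and capacity minus free (0x4ffc00000 - 0x4ffbfd000 = 0x3000) is 12 KiB, i.e. three blocks at the 0x1000 min_alloc_size:

    capacity = 0x4ffc00000
    free     = 0x4ffbfd000

    print(capacity)                     # 21470642176, matches the "open size" line below
    print(capacity / 2**30)             # 19.99609375, reported as 20 GiB
    print(capacity - free)              # 12288 bytes (0x3000) currently allocated
    print((capacity - free) // 0x1000)  # 3 blocks of the 4 KiB min_alloc_size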
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs umount
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bdev(0x558db07c7800 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bdev(0x558db07c7800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bdev(0x558db07c7800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bdev(0x558db07c7800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bdev(0x558db07c7800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs mount
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluefs mount shared_bdev_used = 27262976
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
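The db_paths sizes on the _prepare_db_environment line work out to exactly 95% of the block device capacity reported above (21470642176 bytes); reading that as BlueStore reserving the remaining 5% is an interpretation, not something the log states, but the arithmetic itself is easy to confirm:

    capacity = 21470642176           # from the bdev "open size" line above
    db_paths_size = 20397110067      # from the _prepare_db_environment line

    assert capacity * 95 // 100 == db_paths_size   # 95% of capacity, used for both db and db.slow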
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: RocksDB version: 7.9.2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Git sha 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: DB SUMMARY
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: DB Session ID:  MYGVM5UM26NUCO34OLHU
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: CURRENT file:  CURRENT
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                         Options.error_if_exists: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.create_if_missing: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                                     Options.env: 0x558daf9b7a40
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                                Options.info_log: 0x558db0a13b00
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                              Options.statistics: (nil)
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.use_fsync: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                              Options.db_log_dir: 
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.write_buffer_manager: 0x558db08b9900
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.unordered_write: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.row_cache: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                              Options.wal_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.two_write_queues: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.wal_compression: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.atomic_flush: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.max_background_jobs: 4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.max_background_compactions: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.max_subcompactions: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.max_open_files: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Compression algorithms supported:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: #011kZSTD supported: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: #011kXpressCompression supported: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: #011kBZip2Compression supported: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: #011kLZ4Compression supported: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: #011kZlibCompression supported: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: #011kSnappyCompression supported: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a4c220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558daf9bba30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a4c220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558daf9bba30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a4c220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558daf9bba30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a4c220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558daf9bba30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a4c220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558daf9bba30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a4c220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558daf9bba30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a4c220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558daf9bba30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a4c300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558daf9bb4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a4c300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558daf9bb4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558db0a4c300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558daf9bb4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 9a091446-c12e-4981-8950-d72fc7aacd5d
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853508067213, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853508096465, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853508, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a091446-c12e-4981-8950-d72fc7aacd5d", "db_session_id": "MYGVM5UM26NUCO34OLHU", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853508141940, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853508, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a091446-c12e-4981-8950-d72fc7aacd5d", "db_session_id": "MYGVM5UM26NUCO34OLHU", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853508276700, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853508, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a091446-c12e-4981-8950-d72fc7aacd5d", "db_session_id": "MYGVM5UM26NUCO34OLHU", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853508283288, "job": 1, "event": "recovery_finished"}
Jan 31 04:58:28 np0005603787 ceph-osd[86934]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 04:58:28 np0005603787 podman[87439]: 2026-01-31 09:58:28.295482719 +0000 UTC m=+0.025244396 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:28 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2238925525; not ready for session (expect reconnect)
Jan 31 04:58:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:28 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:28 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 04:58:28 np0005603787 podman[87439]: 2026-01-31 09:58:28.757621634 +0000 UTC m=+0.487383291 container create bf7790382e9aaefb50cfbacf86711b45b489ce23bb741a8518f65d74c2177946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 04:58:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 04:58:29 np0005603787 systemd[1]: Started libpod-conmon-bf7790382e9aaefb50cfbacf86711b45b489ce23bb741a8518f65d74c2177946.scope.
Jan 31 04:58:29 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:29 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:29 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 31 04:58:29 np0005603787 ceph-mon[75160]: Deploying daemon osd.2 on compute-0
Jan 31 04:58:29 np0005603787 podman[87439]: 2026-01-31 09:58:29.537560385 +0000 UTC m=+1.267322072 container init bf7790382e9aaefb50cfbacf86711b45b489ce23bb741a8518f65d74c2177946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:29 np0005603787 podman[87439]: 2026-01-31 09:58:29.545982494 +0000 UTC m=+1.275744191 container start bf7790382e9aaefb50cfbacf86711b45b489ce23bb741a8518f65d74c2177946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:29 np0005603787 eloquent_hellman[87455]: 167 167
Jan 31 04:58:29 np0005603787 systemd[1]: libpod-bf7790382e9aaefb50cfbacf86711b45b489ce23bb741a8518f65d74c2177946.scope: Deactivated successfully.
Jan 31 04:58:29 np0005603787 conmon[87455]: conmon bf7790382e9aaefb50cf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bf7790382e9aaefb50cfbacf86711b45b489ce23bb741a8518f65d74c2177946.scope/container/memory.events
Jan 31 04:58:29 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2238925525; not ready for session (expect reconnect)
Jan 31 04:58:29 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:58:29 np0005603787 podman[87439]: 2026-01-31 09:58:29.986591122 +0000 UTC m=+1.716352779 container attach bf7790382e9aaefb50cfbacf86711b45b489ce23bb741a8518f65d74c2177946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 04:58:29 np0005603787 podman[87439]: 2026-01-31 09:58:29.987589129 +0000 UTC m=+1.717350786 container died bf7790382e9aaefb50cfbacf86711b45b489ce23bb741a8518f65d74c2177946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:30 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 04:58:30 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558db0c1bc00
Jan 31 04:58:30 np0005603787 ceph-osd[86934]: rocksdb: DB pointer 0x558db0bcc000
Jan 31 04:58:30 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 04:58:30 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 31 04:58:30 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 31 04:58:30 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:58:30 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2.2 total, 2.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2.2 total, 2.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558daf9bba30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2.2 total, 2.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558daf9bba30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2.2 total, 2.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558daf9bba30#2 capacity: 460.80 MB usag
Jan 31 04:58:30 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 04:58:30 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 04:58:30 np0005603787 ceph-osd[86934]: _get_class not permitted to load lua
Jan 31 04:58:30 np0005603787 ceph-osd[86934]: _get_class not permitted to load sdk
Jan 31 04:58:30 np0005603787 ceph-osd[86934]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 04:58:30 np0005603787 ceph-osd[86934]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 04:58:30 np0005603787 ceph-osd[86934]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 04:58:30 np0005603787 ceph-osd[86934]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 04:58:30 np0005603787 ceph-osd[86934]: osd.1 0 load_pgs
Jan 31 04:58:30 np0005603787 ceph-osd[86934]: osd.1 0 load_pgs opened 0 pgs
Jan 31 04:58:30 np0005603787 ceph-osd[86934]: osd.1 0 log_to_monitors true
Jan 31 04:58:30 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1[86930]: 2026-01-31T09:58:30.213+0000 7fd8d8c668c0 -1 osd.1 0 log_to_monitors true
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3037892145,v1:192.168.122.100:6807/3037892145]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 04:58:30 np0005603787 systemd[1]: var-lib-containers-storage-overlay-b53c0f42e588b44c31ce3e6070550f789d601728b45912a8faa11553fd82248c-merged.mount: Deactivated successfully.
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: from='osd.1 [v2:192.168.122.100:6806/3037892145,v1:192.168.122.100:6807/3037892145]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 31 04:58:30 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2238925525; not ready for session (expect reconnect)
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:30 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3037892145,v1:192.168.122.100:6807/3037892145]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Jan 31 04:58:30 np0005603787 podman[87439]: 2026-01-31 09:58:30.757339656 +0000 UTC m=+2.487101313 container remove bf7790382e9aaefb50cfbacf86711b45b489ce23bb741a8518f65d74c2177946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3037892145,v1:192.168.122.100:6807/3037892145]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:30 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 04:58:30 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 04:58:30 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:30 np0005603787 systemd[1]: libpod-conmon-bf7790382e9aaefb50cfbacf86711b45b489ce23bb741a8518f65d74c2177946.scope: Deactivated successfully.
Jan 31 04:58:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 04:58:31 np0005603787 podman[87519]: 2026-01-31 09:58:31.08865952 +0000 UTC m=+0.111364544 container create a9e89c2b27f468e741b04f1616b622190bf9feb6cfb972983cd08e9f73277bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:31 np0005603787 podman[87519]: 2026-01-31 09:58:31.001313019 +0000 UTC m=+0.024018063 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:31 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 04:58:31 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 04:58:31 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:58:31 np0005603787 systemd[1]: Started libpod-conmon-a9e89c2b27f468e741b04f1616b622190bf9feb6cfb972983cd08e9f73277bad.scope.
Jan 31 04:58:31 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/377a3e0c9350458d0271a59a861b9e835b630eead00b9e8fe35944247ff1e7f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/377a3e0c9350458d0271a59a861b9e835b630eead00b9e8fe35944247ff1e7f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/377a3e0c9350458d0271a59a861b9e835b630eead00b9e8fe35944247ff1e7f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/377a3e0c9350458d0271a59a861b9e835b630eead00b9e8fe35944247ff1e7f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/377a3e0c9350458d0271a59a861b9e835b630eead00b9e8fe35944247ff1e7f1/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:31 np0005603787 podman[87519]: 2026-01-31 09:58:31.674421719 +0000 UTC m=+0.697126763 container init a9e89c2b27f468e741b04f1616b622190bf9feb6cfb972983cd08e9f73277bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate-test, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:31 np0005603787 podman[87519]: 2026-01-31 09:58:31.681103721 +0000 UTC m=+0.703808735 container start a9e89c2b27f468e741b04f1616b622190bf9feb6cfb972983cd08e9f73277bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate-test, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:31 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2238925525; not ready for session (expect reconnect)
Jan 31 04:58:31 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:31 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:31 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 04:58:31 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 31 04:58:31 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 04:58:31 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:58:31 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate-test[87536]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 31 04:58:31 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate-test[87536]:                            [--no-systemd] [--no-tmpfs]
Jan 31 04:58:31 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate-test[87536]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 04:58:31 np0005603787 systemd[1]: libpod-a9e89c2b27f468e741b04f1616b622190bf9feb6cfb972983cd08e9f73277bad.scope: Deactivated successfully.
Jan 31 04:58:31 np0005603787 podman[87519]: 2026-01-31 09:58:31.942834915 +0000 UTC m=+0.965539949 container attach a9e89c2b27f468e741b04f1616b622190bf9feb6cfb972983cd08e9f73277bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate-test, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 04:58:31 np0005603787 podman[87519]: 2026-01-31 09:58:31.943203725 +0000 UTC m=+0.965908749 container died a9e89c2b27f468e741b04f1616b622190bf9feb6cfb972983cd08e9f73277bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate-test, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 04:58:31 np0005603787 ceph-mon[75160]: from='osd.1 [v2:192.168.122.100:6806/3037892145,v1:192.168.122.100:6807/3037892145]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 31 04:58:31 np0005603787 ceph-mon[75160]: from='osd.1 [v2:192.168.122.100:6806/3037892145,v1:192.168.122.100:6807/3037892145]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 04:58:32 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3037892145,v1:192.168.122.100:6807/3037892145]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 04:58:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e10 e10: 3 total, 0 up, 3 in
Jan 31 04:58:32 np0005603787 ceph-osd[86934]: osd.1 0 done with init, starting boot process
Jan 31 04:58:32 np0005603787 ceph-osd[86934]: osd.1 0 start_boot
Jan 31 04:58:32 np0005603787 ceph-osd[86934]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 04:58:32 np0005603787 ceph-osd[86934]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 04:58:32 np0005603787 ceph-osd[86934]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 04:58:32 np0005603787 ceph-osd[86934]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 04:58:32 np0005603787 ceph-osd[86934]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 31 04:58:32 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 0 up, 3 in
Jan 31 04:58:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:32 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 04:58:32 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 04:58:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:32 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:32 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 04:58:32 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 04:58:32 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:32 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3037892145; not ready for session (expect reconnect)
Jan 31 04:58:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 04:58:32 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 04:58:32 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 04:58:32 np0005603787 systemd[1]: var-lib-containers-storage-overlay-377a3e0c9350458d0271a59a861b9e835b630eead00b9e8fe35944247ff1e7f1-merged.mount: Deactivated successfully.
Jan 31 04:58:32 np0005603787 podman[87519]: 2026-01-31 09:58:32.529570373 +0000 UTC m=+1.552275407 container remove a9e89c2b27f468e741b04f1616b622190bf9feb6cfb972983cd08e9f73277bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate-test, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 31 04:58:32 np0005603787 systemd[1]: libpod-conmon-a9e89c2b27f468e741b04f1616b622190bf9feb6cfb972983cd08e9f73277bad.scope: Deactivated successfully.
Jan 31 04:58:32 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2238925525; not ready for session (expect reconnect)
Jan 31 04:58:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:32 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:32 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 04:58:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 04:58:33 np0005603787 ceph-mon[75160]: from='osd.1 [v2:192.168.122.100:6806/3037892145,v1:192.168.122.100:6807/3037892145]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 04:58:33 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3037892145; not ready for session (expect reconnect)
Jan 31 04:58:33 np0005603787 systemd[1]: Reloading.
Jan 31 04:58:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 04:58:33 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 04:58:33 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 04:58:33 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:58:33 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:58:33 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2238925525; not ready for session (expect reconnect)
Jan 31 04:58:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:33 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:33 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 04:58:33 np0005603787 systemd[1]: Reloading.
Jan 31 04:58:33 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:58:33 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:58:33 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:58:34 np0005603787 systemd[1]: Starting Ceph osd.2 for 962d77ae-dc67-5de8-89d8-3d1670c67b61...
Jan 31 04:58:34 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3037892145; not ready for session (expect reconnect)
Jan 31 04:58:34 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 04:58:34 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 04:58:34 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 04:58:34 np0005603787 podman[87699]: 2026-01-31 09:58:34.376038855 +0000 UTC m=+0.111535408 container create 59a0d20cf084183a99c32f597f04717474f64af37ac231efe4e3364d6cf0cdee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:34 np0005603787 podman[87699]: 2026-01-31 09:58:34.28597019 +0000 UTC m=+0.021466773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:34 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:34 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f846c5989122ea871c9e62e610e3576bc9756b5883b7686c536bc3280cc37885/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:34 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f846c5989122ea871c9e62e610e3576bc9756b5883b7686c536bc3280cc37885/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:34 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f846c5989122ea871c9e62e610e3576bc9756b5883b7686c536bc3280cc37885/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:34 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f846c5989122ea871c9e62e610e3576bc9756b5883b7686c536bc3280cc37885/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:34 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f846c5989122ea871c9e62e610e3576bc9756b5883b7686c536bc3280cc37885/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:34 np0005603787 podman[87699]: 2026-01-31 09:58:34.554320895 +0000 UTC m=+0.289817468 container init 59a0d20cf084183a99c32f597f04717474f64af37ac231efe4e3364d6cf0cdee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 04:58:34 np0005603787 podman[87699]: 2026-01-31 09:58:34.559171776 +0000 UTC m=+0.294668339 container start 59a0d20cf084183a99c32f597f04717474f64af37ac231efe4e3364d6cf0cdee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 04:58:34 np0005603787 podman[87699]: 2026-01-31 09:58:34.609186444 +0000 UTC m=+0.344683027 container attach 59a0d20cf084183a99c32f597f04717474f64af37ac231efe4e3364d6cf0cdee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 04:58:34 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2238925525; not ready for session (expect reconnect)
Jan 31 04:58:34 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:34 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:34 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 04:58:34 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate[87716]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:34 np0005603787 bash[87699]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:34 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate[87716]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:34 np0005603787 bash[87699]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 04:58:35 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3037892145; not ready for session (expect reconnect)
Jan 31 04:58:35 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 04:58:35 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 04:58:35 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 04:58:35 np0005603787 lvm[87801]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 04:58:35 np0005603787 lvm[87802]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 04:58:35 np0005603787 lvm[87802]: VG ceph_vg1 finished
Jan 31 04:58:35 np0005603787 lvm[87801]: VG ceph_vg0 finished
Jan 31 04:58:35 np0005603787 lvm[87804]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 04:58:35 np0005603787 lvm[87804]: VG ceph_vg2 finished
Jan 31 04:58:35 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate[87716]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 04:58:35 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate[87716]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:35 np0005603787 bash[87699]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 04:58:35 np0005603787 bash[87699]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:35 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate[87716]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:35 np0005603787 bash[87699]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 04:58:35 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate[87716]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 04:58:35 np0005603787 bash[87699]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 04:58:35 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate[87716]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 31 04:58:35 np0005603787 bash[87699]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 31 04:58:35 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate[87716]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:35 np0005603787 bash[87699]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:35 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate[87716]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:35 np0005603787 bash[87699]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:35 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate[87716]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 04:58:35 np0005603787 bash[87699]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 04:58:35 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate[87716]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 04:58:35 np0005603787 bash[87699]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 04:58:35 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate[87716]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 31 04:58:35 np0005603787 bash[87699]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 31 04:58:35 np0005603787 systemd[1]: libpod-59a0d20cf084183a99c32f597f04717474f64af37ac231efe4e3364d6cf0cdee.scope: Deactivated successfully.
Jan 31 04:58:35 np0005603787 systemd[1]: libpod-59a0d20cf084183a99c32f597f04717474f64af37ac231efe4e3364d6cf0cdee.scope: Consumed 1.303s CPU time.
Jan 31 04:58:35 np0005603787 podman[87918]: 2026-01-31 09:58:35.674853541 +0000 UTC m=+0.026442159 container died 59a0d20cf084183a99c32f597f04717474f64af37ac231efe4e3364d6cf0cdee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:35 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2238925525; not ready for session (expect reconnect)
Jan 31 04:58:35 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:35 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:35 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 04:58:35 np0005603787 ceph-mgr[75453]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 04:58:36 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f846c5989122ea871c9e62e610e3576bc9756b5883b7686c536bc3280cc37885-merged.mount: Deactivated successfully.
Jan 31 04:58:36 np0005603787 podman[87918]: 2026-01-31 09:58:36.17026767 +0000 UTC m=+0.521856288 container remove 59a0d20cf084183a99c32f597f04717474f64af37ac231efe4e3364d6cf0cdee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:36 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3037892145; not ready for session (expect reconnect)
Jan 31 04:58:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 04:58:36 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 04:58:36 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 04:58:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:58:36 np0005603787 podman[87976]: 2026-01-31 09:58:36.348971391 +0000 UTC m=+0.064198654 container create 1afffe8560795475eaea589964a4866bebefc730f550a6d5acd0b25a61e22204 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:36 np0005603787 podman[87976]: 2026-01-31 09:58:36.306692782 +0000 UTC m=+0.021920055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:36 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e62fc4f48ba1d722fcb08080325e128b028dbe921a267b56f6908e64b6933a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:36 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e62fc4f48ba1d722fcb08080325e128b028dbe921a267b56f6908e64b6933a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:36 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e62fc4f48ba1d722fcb08080325e128b028dbe921a267b56f6908e64b6933a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:36 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e62fc4f48ba1d722fcb08080325e128b028dbe921a267b56f6908e64b6933a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:36 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e62fc4f48ba1d722fcb08080325e128b028dbe921a267b56f6908e64b6933a/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:36 np0005603787 podman[87976]: 2026-01-31 09:58:36.479847263 +0000 UTC m=+0.195074536 container init 1afffe8560795475eaea589964a4866bebefc730f550a6d5acd0b25a61e22204 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:36 np0005603787 podman[87976]: 2026-01-31 09:58:36.483816061 +0000 UTC m=+0.199043304 container start 1afffe8560795475eaea589964a4866bebefc730f550a6d5acd0b25a61e22204 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: pidfile_write: ignore empty --pid-file
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 04:58:36 np0005603787 ceph-osd[85879]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 9.569 iops: 2449.711 elapsed_sec: 1.225
Jan 31 04:58:36 np0005603787 ceph-osd[85879]: log_channel(cluster) log [WRN] : OSD bench result of 2449.711462 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 04:58:36 np0005603787 bash[87976]: 1afffe8560795475eaea589964a4866bebefc730f550a6d5acd0b25a61e22204
Jan 31 04:58:36 np0005603787 ceph-osd[85879]: osd.0 0 waiting for initial osdmap
Jan 31 04:58:36 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0[85875]: 2026-01-31T09:58:36.525+0000 7f8eb6bad640 -1 osd.0 0 waiting for initial osdmap
Jan 31 04:58:36 np0005603787 systemd[1]: Started Ceph osd.2 for 962d77ae-dc67-5de8-89d8-3d1670c67b61.
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 04:58:36 np0005603787 ceph-osd[85879]: osd.0 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 31 04:58:36 np0005603787 ceph-osd[85879]: osd.0 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 31 04:58:36 np0005603787 ceph-osd[85879]: osd.0 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 31 04:58:36 np0005603787 ceph-osd[85879]: osd.0 10 check_osdmap_features require_osd_release unknown -> tentacle
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 04:58:36 np0005603787 ceph-osd[85879]: osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 04:58:36 np0005603787 ceph-osd[85879]: osd.0 10 set_numa_affinity not setting numa affinity
Jan 31 04:58:36 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-0[85875]: 2026-01-31T09:58:36.610+0000 7f8eb19b2640 -1 osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 04:58:36 np0005603787 ceph-osd[85879]: osd.0 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 31 04:58:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:58:36 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144400 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 04:58:36 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2238925525; not ready for session (expect reconnect)
Jan 31 04:58:36 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:36 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:36 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd144000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: load: jerasure load: lrc 
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8dd145c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8ddddb800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8ddddb800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8ddddb800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8ddddb800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs mount
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs mount shared_bdev_used = 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: RocksDB version: 7.9.2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Git sha 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: DB SUMMARY
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: DB Session ID:  DCMGQW6IDRP0BGSN154F
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: CURRENT file:  CURRENT
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                         Options.error_if_exists: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.create_if_missing: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                                     Options.env: 0x55c8dcfd5ea0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                                Options.info_log: 0x55c8de0268a0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                              Options.statistics: (nil)
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                               Options.use_fsync: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                              Options.db_log_dir: 
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.write_buffer_manager: 0x55c8dd03ab40
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.unordered_write: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                               Options.row_cache: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                              Options.wal_filter: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.two_write_queues: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.wal_compression: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.atomic_flush: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.max_background_jobs: 4
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.max_background_compactions: -1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.max_subcompactions: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.max_open_files: -1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Compression algorithms supported:
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: #011kZSTD supported: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: #011kXpressCompression supported: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: #011kBZip2Compression supported: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: #011kLZ4Compression supported: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: #011kZlibCompression supported: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: #011kSnappyCompression supported: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de026c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c8dcfd98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de026c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c8dcfd98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de026c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c8dcfd98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de026c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c8dcfd98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
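The [m-2] dump above repeats the sizing that every shard of this OSD's RocksDB gets. Read together, a few of those numbers set the write-path budget: a column family may hold up to write_buffer_size x max_write_buffer_number of memtables before writes stall, a flush is triggered once min_write_buffer_number_to_merge buffers accumulate, and with level_compaction_dynamic_level_bytes off the level targets grow from max_bytes_for_level_base by max_bytes_for_level_multiplier per level. The short Python sketch below only works that arithmetic out from the values printed above; it is an illustration under standard RocksDB level-compaction semantics, not output of any Ceph or RocksDB tool.

    # Rough arithmetic implied by the per-column-family options above.
    # Values are transcribed from the [m-2] dump; nothing here is read
    # from a live cluster.
    MiB = 1024 * 1024

    write_buffer_size = 16 * MiB     # Options.write_buffer_size: 16777216
    max_write_buffer_number = 64     # Options.max_write_buffer_number: 64
    min_merge = 6                    # Options.min_write_buffer_number_to_merge: 6
    level_base = 1024 * MiB          # Options.max_bytes_for_level_base: 1073741824
    level_multiplier = 8             # Options.max_bytes_for_level_multiplier: 8.0
    num_levels = 7                   # Options.num_levels: 7

    # Worst-case memtable memory one column family may accumulate before
    # writes stall, and the approximate amount merged into each flush.
    print("max memtable budget:", write_buffer_size * max_write_buffer_number // MiB, "MiB")
    print("data per flush (approx):", write_buffer_size * min_merge // MiB, "MiB")

    # Nominal size targets for L1..L6 under level-style compaction with
    # dynamic level bytes disabled (all addtl multipliers are 1 above).
    for level in range(1, num_levels):
        print(f"L{level} target: {level_base * level_multiplier ** (level - 1) // MiB} MiB")

Run as-is it prints 1024 MiB of memtable budget, roughly 96 MiB per flush, and level targets of 1 GiB, 8 GiB, 64 GiB and so on.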
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de026c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c8dcfd98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de026c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c8dcfd98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de026c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c8dcfd98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
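The option blocks for the m and p families above are near-identical; what actually changes between families is mostly the table_factory line (the block cache object and its capacity differ for the O families that follow). When comparing dumps like these, a throwaway parser is usually easier than eyeballing. The sketch below groups the "Options for column family" sections from a saved copy of this journal and prints only the keys whose values differ between families; the osd.log filename and the idea of exporting the journal to a file first are assumptions for illustration, not something shown in this log.

    #!/usr/bin/env python3
    """Group the rocksdb option dumps in a saved journal extract by column
    family and report option keys whose values differ between families."""
    import re
    import sys
    from collections import defaultdict

    CF_HEADER = re.compile(r"Options for column family \[([^\]]+)\]")
    OPTION = re.compile(r"rocksdb:\s+(\S[^:]*):\s+(.*)$")

    def parse(lines):
        options = defaultdict(dict)   # column family -> {option name: value}
        current = None
        for line in lines:
            header = CF_HEADER.search(line)
            if header:
                current = header.group(1)
                continue
            if current is None:
                continue
            match = OPTION.search(line)
            if match:
                options[current][match.group(1).strip()] = match.group(2).strip()
        return options

    def report_differences(options):
        families = sorted(options)
        keys = set().union(*options.values())
        for key in sorted(keys):
            values = {cf: options[cf].get(key) for cf in families}
            if len(set(values.values())) > 1:
                print(key)
                for cf, value in values.items():
                    print(f"  {cf}: {value}")

    if __name__ == "__main__":
        # e.g. journal text saved to osd.log beforehand (placeholder name)
        with open(sys.argv[1] if len(sys.argv) > 1 else "osd.log") as fh:
            report_differences(parse(fh))

Applied to this extract it should surface little beyond the table_factory line, which is the point: the column families share one configuration template.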
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de026c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c8dcfd9a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de026c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c8dcfd9a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de026c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c8dcfd9a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
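The recovery listing above enumerates the twelve column families BlueStore keeps in this RocksDB instance (default plus the m-*, p-* and O-* shards and the L and P families), which is why the same per-family option dump repeats earlier in the log. A minimal sketch for pulling that listing out of the journal text, assuming the excerpt has been saved to a file named osd.log (a hypothetical name):

    import re

    LOG_PATH = "osd.log"  # hypothetical file holding this journal excerpt

    # Matches lines such as:
    #   rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
    CF_RE = re.compile(r"Column family \[([^\]]+)\] \(ID (\d+)\), log number is (\d+)")

    families = {}
    with open(LOG_PATH) as fh:
        for line in fh:
            m = CF_RE.search(line)
            if m:
                families[m.group(1)] = {"id": int(m.group(2)), "log_number": int(m.group(3))}

    # For this excerpt: default, m-0..m-2, p-0..p-2, O-0..O-2, L and P,
    # all recovered against log number 5.
    print(families)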
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 467b18f0-a2f0-458a-ac30-a187af46dba0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853516934600, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853516935668, "job": 1, "event": "recovery_finished"}
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
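The _open_db line echoes the option string BlueStore handed to RocksDB as comma-separated key=value pairs; these are the settings reflected in the per-column-family dumps above (16 MiB write buffers, LZ4 compression, level-style compaction, and so on). A small parsing sketch, with the string literal copied from the line above:

    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
                "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    # Values stay as strings because some carry unit suffixes (compaction_readahead_size=2MB).
    options = dict(pair.split("=", 1) for pair in opts_str.split(","))

    assert options["write_buffer_size"] == "16777216"   # 16 MiB memtables
    assert options["compression"] == "kLZ4Compression"
    print(options)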
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: freelist init
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: freelist _read_cfg
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
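The allocator line is internally consistent and easy to check by hand: capacity 0x4ffc00000 is 21470642176 bytes (just under 20 GiB, matching the bdev open size reported below), and the gap between capacity and free is three 4 KiB allocation units. A worked check:

    capacity  = 0x4ffc00000   # from the _init_alloc line
    free      = 0x4ffbfd000
    min_alloc = 0x1000        # 4 KiB block size / min_alloc_size

    print(capacity)                        # 21470642176 bytes
    print(capacity / 2**30)                # ~19.996, reported as "20 GiB"
    print(capacity - free)                 # 12288 bytes already allocated
    print((capacity - free) // min_alloc)  # 3 allocation units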
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs umount
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8ddddb800 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8ddddb800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8ddddb800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8ddddb800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bdev(0x55c8ddddb800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs mount
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluefs mount shared_bdev_used = 27262976
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
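The db_paths budget can be cross-checked against the device size: 20397110067 bytes is consistent with a 95% share of the 21470642176-byte block device, i.e. BlueStore here gives the db and db.slow paths the same nominal budget carved from the shared device (the 95% factor is an observation from these two numbers, not something stated in the log itself):

    bdev_size = 21470642176   # bdev open size, from the block-device lines nearby
    db_budget = 20397110067   # per-path size from _prepare_db_environment

    print(db_budget / bdev_size)    # ~0.95
    print(int(bdev_size * 0.95))    # 20397110067, matching the log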
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: RocksDB version: 7.9.2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Git sha 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: DB SUMMARY
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: DB Session ID:  DCMGQW6IDRP0BGSN154E
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: CURRENT file:  CURRENT
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                         Options.error_if_exists: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.create_if_missing: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                                     Options.env: 0x55c8de1f6a80
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                                Options.info_log: 0x55c8de026a20
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                              Options.statistics: (nil)
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                               Options.use_fsync: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                              Options.db_log_dir: 
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                    Options.write_buffer_manager: 0x55c8dd03b900
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.unordered_write: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                               Options.row_cache: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                              Options.wal_filter: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.two_write_queues: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.wal_compression: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.atomic_flush: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.max_background_jobs: 4
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.max_background_compactions: -1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.max_subcompactions: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.max_open_files: -1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Compression algorithms supported:
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: #011kZSTD supported: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: #011kXpressCompression supported: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: #011kBZip2Compression supported: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: #011kLZ4Compression supported: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: #011kZlibCompression supported: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: #011kSnappyCompression supported: 1
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:36 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de026bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c8dcfd98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de026bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c8dcfd98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de026bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c8dcfd98d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de026bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c8dcfd98d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de026bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c8dcfd98d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de026bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c8dcfd98d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de026bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c8dcfd98d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de0270c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c8dcfd9a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de0270c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c8dcfd9a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:           Options.merge_operator: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c8de0270c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c8dcfd9a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.compression: LZ4
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.num_levels: 7
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.bloom_locality: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                               Options.ttl: 2592000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                       Options.enable_blob_files: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                           Options.min_blob_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
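The per-column-family dump above repeats for each BlueStore column family (p-2, O-0, O-1, O-2), differing mainly in the block cache capacity (483183820 bytes for p-2 versus 536870912 for the O-* families) and in factory/cache pointer addresses. A minimal Python sketch, assuming the journal text has been saved to a plain file such as osd.log (a hypothetical name), that collects the "Options.<name>: <value>" pairs per column family and diffs two of them:

    import re
    from collections import defaultdict

    # Matches lines like: "... rocksdb:   Options.write_buffer_size: 16777216"
    OPT_RE = re.compile(r'rocksdb:\s+Options\.([\w.\[\]]+):\s+(.*)$')
    # Matches lines like: "... Options for column family [p-2]:"
    CF_RE = re.compile(r'Options for column family \[([^\]]+)\]')

    def parse_cf_options(lines):
        """Group 'Options.x: y' pairs under the column family announced just before them."""
        opts = defaultdict(dict)
        cf = None
        for line in lines:
            m = CF_RE.search(line)
            if m:
                cf = m.group(1)
                continue
            m = OPT_RE.search(line)
            if m and cf is not None:
                opts[cf][m.group(1)] = m.group(2).strip()
        return opts

    def diff_cf(opts, a, b):
        """Return option names whose values differ between column families a and b."""
        keys = set(opts[a]) | set(opts[b])
        return {k: (opts[a].get(k), opts[b].get(k)) for k in keys
                if opts[a].get(k) != opts[b].get(k)}

    # Usage (hypothetical file name):
    # with open("osd.log") as f:
    #     opts = parse_cf_options(f)
    # print(diff_cf(opts, "p-2", "O-0"))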
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 467b18f0-a2f0-458a-ac30-a187af46dba0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853516990831, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853517008038, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853516, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "467b18f0-a2f0-458a-ac30-a187af46dba0", "db_session_id": "DCMGQW6IDRP0BGSN154E", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853517041442, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853517, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "467b18f0-a2f0-458a-ac30-a187af46dba0", "db_session_id": "DCMGQW6IDRP0BGSN154E", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:58:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 04:58:37 np0005603787 podman[88433]: 2026-01-31 09:58:37.061970425 +0000 UTC m=+0.066094405 container create d4712e3a251bab18a02edd54ce00dc7c200eabc885835dde1f10189f70b3825f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853517072973, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853517, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "467b18f0-a2f0-458a-ac30-a187af46dba0", "db_session_id": "DCMGQW6IDRP0BGSN154E", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:58:37 np0005603787 podman[88433]: 2026-01-31 09:58:37.012317886 +0000 UTC m=+0.016441876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853517111176, "job": 1, "event": "recovery_finished"}
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 04:58:37 np0005603787 systemd[1]: Started libpod-conmon-d4712e3a251bab18a02edd54ce00dc7c200eabc885835dde1f10189f70b3825f.scope.
Jan 31 04:58:37 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:37 np0005603787 podman[88433]: 2026-01-31 09:58:37.19037298 +0000 UTC m=+0.194496980 container init d4712e3a251bab18a02edd54ce00dc7c200eabc885835dde1f10189f70b3825f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_morse, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:37 np0005603787 podman[88433]: 2026-01-31 09:58:37.197281197 +0000 UTC m=+0.201405157 container start d4712e3a251bab18a02edd54ce00dc7c200eabc885835dde1f10189f70b3825f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_morse, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:37 np0005603787 intelligent_morse[88491]: 167 167
Jan 31 04:58:37 np0005603787 systemd[1]: libpod-d4712e3a251bab18a02edd54ce00dc7c200eabc885835dde1f10189f70b3825f.scope: Deactivated successfully.
Jan 31 04:58:37 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3037892145; not ready for session (expect reconnect)
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 04:58:37 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 04:58:37 np0005603787 podman[88433]: 2026-01-31 09:58:37.221148955 +0000 UTC m=+0.225272945 container attach d4712e3a251bab18a02edd54ce00dc7c200eabc885835dde1f10189f70b3825f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_morse, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 04:58:37 np0005603787 podman[88433]: 2026-01-31 09:58:37.223952981 +0000 UTC m=+0.228076951 container died d4712e3a251bab18a02edd54ce00dc7c200eabc885835dde1f10189f70b3825f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_morse, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
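The podman lines in this window (create, init, start, attach, died for container d4712e3a... named intelligent_morse) trace a short-lived helper container run from the Ceph image; its podman timestamps are UTC (+0000), which is consistent with the journal prefix being local time here (04:58:37 versus 09:58:37). A sketch that groups such podman events by container ID to reconstruct the lifecycle from the journal text:

    import re
    from collections import defaultdict

    # Matches: "podman[PID]: DATE TIME +0000 UTC m=+... container EVENT CONTAINER_ID"
    PODMAN_RE = re.compile(
        r'podman\[\d+\]:\s+(?P<ts>\S+\s+\S+)\s+\+0000 UTC\s+m=\S+\s+'
        r'container\s+(?P<event>\w+)\s+(?P<cid>[0-9a-f]{64})')

    def container_lifecycles(lines):
        """Map container ID -> ordered list of (timestamp, event) from podman journal lines."""
        timeline = defaultdict(list)
        for line in lines:
            m = PODMAN_RE.search(line)
            if m:
                timeline[m.group('cid')].append((m.group('ts'), m.group('event')))
        return timeline

    # For the container above this yields roughly:
    #   d4712e3a... -> [create, init, start, attach, died]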
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c8de240000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: DB pointer 0x55c8de1e0000
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.3 total, 0.3 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c8dcfd98d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] 
**#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c8dcfd98d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: _get_class not permitted to load lua
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: _get_class not permitted to load sdk
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: osd.2 0 load_pgs
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: osd.2 0 load_pgs opened 0 pgs
Jan 31 04:58:37 np0005603787 ceph-osd[87996]: osd.2 0 log_to_monitors true
Jan 31 04:58:37 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2[87992]: 2026-01-31T09:58:37.289+0000 7fa9a35688c0 -1 osd.2 0 log_to_monitors true
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2313967653,v1:192.168.122.100:6811/2313967653]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 31 04:58:37 np0005603787 systemd[1]: var-lib-containers-storage-overlay-e56b405590d76faa5c513285821a9cd751f1a96c963da752ad1da0dc673f135c-merged.mount: Deactivated successfully.
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 04:58:37 np0005603787 podman[88433]: 2026-01-31 09:58:37.422659395 +0000 UTC m=+0.426783365 container remove d4712e3a251bab18a02edd54ce00dc7c200eabc885835dde1f10189f70b3825f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_morse, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2313967653,v1:192.168.122.100:6811/2313967653]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2238925525,v1:192.168.122.100:6803/2238925525] boot
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2313967653,v1:192.168.122.100:6811/2313967653]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e11 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:37 np0005603787 ceph-osd[85879]: osd.0 11 state: booting -> active
Jan 31 04:58:37 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 04:58:37 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:37 np0005603787 systemd[1]: libpod-conmon-d4712e3a251bab18a02edd54ce00dc7c200eabc885835dde1f10189f70b3825f.scope: Deactivated successfully.
Jan 31 04:58:37 np0005603787 podman[88549]: 2026-01-31 09:58:37.547207886 +0000 UTC m=+0.044105348 container create fb3973e4d61c46b29076dcf3b462dcef4a51b706a0294579c640e42bfa2952f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_dewdney, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:37 np0005603787 systemd[1]: Started libpod-conmon-fb3973e4d61c46b29076dcf3b462dcef4a51b706a0294579c640e42bfa2952f0.scope.
Jan 31 04:58:37 np0005603787 podman[88549]: 2026-01-31 09:58:37.524752827 +0000 UTC m=+0.021650319 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:37 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425b5b75905709eb21665cd6312bb09de3ec57d0727da8b21acbaf12c0173781/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425b5b75905709eb21665cd6312bb09de3ec57d0727da8b21acbaf12c0173781/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425b5b75905709eb21665cd6312bb09de3ec57d0727da8b21acbaf12c0173781/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425b5b75905709eb21665cd6312bb09de3ec57d0727da8b21acbaf12c0173781/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: OSD bench result of 2449.711462 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
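(Note: the warning above is advisory. The built-in OSD bench on these loop-device OSDs reported ~2450 IOPS, which falls outside the 50–500 IOPS plausibility window for the hdd device class, so the mClock scheduler keeps the default 315 IOPS. Following the log's own recommendation, the capacity would be measured with an external tool such as fio and then overridden. A minimal sketch, not taken from this deployment; the value 500 is purely illustrative and the per-OSD scope is an assumption:

  # Override the IOPS capacity mClock assumes for osd.0 (replace 500 with the fio-measured figure)
  ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 500
  # Optionally re-run the built-in bench (12288000 bytes in 4 KiB writes, as in the startup bench above)
  ceph tell osd.0 bench 12288000 4096
)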
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: from='osd.2 [v2:192.168.122.100:6810/2313967653,v1:192.168.122.100:6811/2313967653]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: from='osd.2 [v2:192.168.122.100:6810/2313967653,v1:192.168.122.100:6811/2313967653]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: osd.0 [v2:192.168.122.100:6802/2238925525,v1:192.168.122.100:6803/2238925525] boot
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: from='osd.2 [v2:192.168.122.100:6810/2313967653,v1:192.168.122.100:6811/2313967653]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 04:58:37 np0005603787 podman[88549]: 2026-01-31 09:58:37.685376747 +0000 UTC m=+0.182274209 container init fb3973e4d61c46b29076dcf3b462dcef4a51b706a0294579c640e42bfa2952f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:37 np0005603787 podman[88549]: 2026-01-31 09:58:37.691552474 +0000 UTC m=+0.188449936 container start fb3973e4d61c46b29076dcf3b462dcef4a51b706a0294579c640e42bfa2952f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_dewdney, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 04:58:37 np0005603787 podman[88549]: 2026-01-31 09:58:37.717180721 +0000 UTC m=+0.214078203 container attach fb3973e4d61c46b29076dcf3b462dcef4a51b706a0294579c640e42bfa2952f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_dewdney, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 04:58:37 np0005603787 ceph-mgr[75453]: [devicehealth INFO root] creating mgr pool
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 31 04:58:37 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 31 04:58:38 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3037892145; not ready for session (expect reconnect)
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 04:58:38 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 04:58:38 np0005603787 lvm[88644]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 04:58:38 np0005603787 lvm[88644]: VG ceph_vg0 finished
Jan 31 04:58:38 np0005603787 lvm[88646]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 04:58:38 np0005603787 lvm[88646]: VG ceph_vg1 finished
Jan 31 04:58:38 np0005603787 lvm[88647]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 04:58:38 np0005603787 lvm[88647]: VG ceph_vg2 finished
Jan 31 04:58:38 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 04:58:38 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 04:58:38 np0005603787 adoring_dewdney[88566]: {}
Jan 31 04:58:38 np0005603787 systemd[1]: libpod-fb3973e4d61c46b29076dcf3b462dcef4a51b706a0294579c640e42bfa2952f0.scope: Deactivated successfully.
Jan 31 04:58:38 np0005603787 podman[88549]: 2026-01-31 09:58:38.368521371 +0000 UTC m=+0.865418833 container died fb3973e4d61c46b29076dcf3b462dcef4a51b706a0294579c640e42bfa2952f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_dewdney, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 04:58:38 np0005603787 systemd[1]: var-lib-containers-storage-overlay-425b5b75905709eb21665cd6312bb09de3ec57d0727da8b21acbaf12c0173781-merged.mount: Deactivated successfully.
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2313967653,v1:192.168.122.100:6811/2313967653]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Jan 31 04:58:38 np0005603787 ceph-osd[87996]: osd.2 0 done with init, starting boot process
Jan 31 04:58:38 np0005603787 ceph-osd[87996]: osd.2 0 start_boot
Jan 31 04:58:38 np0005603787 ceph-osd[87996]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 04:58:38 np0005603787 ceph-osd[87996]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 04:58:38 np0005603787 ceph-osd[87996]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 04:58:38 np0005603787 ceph-osd[87996]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 04:58:38 np0005603787 ceph-osd[87996]: osd.2 0  bench count 12288000 bsize 4 KiB
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 04:58:38 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 04:58:38 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 31 04:58:38 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2313967653; not ready for session (expect reconnect)
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:38 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:38 np0005603787 ceph-osd[85879]: osd.0 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 04:58:38 np0005603787 ceph-osd[85879]: osd.0 12 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 31 04:58:38 np0005603787 ceph-osd[85879]: osd.0 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 04:58:38 np0005603787 podman[88549]: 2026-01-31 09:58:38.614132468 +0000 UTC m=+1.111029930 container remove fb3973e4d61c46b29076dcf3b462dcef4a51b706a0294579c640e42bfa2952f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:38 np0005603787 systemd[1]: libpod-conmon-fb3973e4d61c46b29076dcf3b462dcef4a51b706a0294579c640e42bfa2952f0.scope: Deactivated successfully.
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:38 np0005603787 ceph-osd[86934]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 17.349 iops: 4441.233 elapsed_sec: 0.675
Jan 31 04:58:38 np0005603787 ceph-osd[86934]: log_channel(cluster) log [WRN] : OSD bench result of 4441.232959 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 04:58:38 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1[86930]: 2026-01-31T09:58:38.771+0000 7fd8d4be8640 -1 osd.1 0 waiting for initial osdmap
Jan 31 04:58:38 np0005603787 ceph-osd[86934]: osd.1 0 waiting for initial osdmap
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: from='osd.2 [v2:192.168.122.100:6810/2313967653,v1:192.168.122.100:6811/2313967653]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 31 04:58:38 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:38 np0005603787 ceph-osd[86934]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 04:58:38 np0005603787 ceph-osd[86934]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 31 04:58:38 np0005603787 ceph-osd[86934]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 04:58:38 np0005603787 ceph-osd[86934]: osd.1 12 check_osdmap_features require_osd_release unknown -> tentacle
Jan 31 04:58:38 np0005603787 ceph-osd[86934]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 04:58:38 np0005603787 ceph-osd[86934]: osd.1 12 set_numa_affinity not setting numa affinity
Jan 31 04:58:38 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-1[86930]: 2026-01-31T09:58:38.856+0000 7fd8cf9ed640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 04:58:38 np0005603787 ceph-osd[86934]: osd.1 12 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Jan 31 04:58:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v31: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 31 04:58:39 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3037892145; not ready for session (expect reconnect)
Jan 31 04:58:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 04:58:39 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 04:58:39 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 04:58:39 np0005603787 podman[88783]: 2026-01-31 09:58:39.305211148 +0000 UTC m=+0.134389839 container exec 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:39 np0005603787 podman[88783]: 2026-01-31 09:58:39.427491367 +0000 UTC m=+0.256670038 container exec_died 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 04:58:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 31 04:58:39 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 31 04:58:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Jan 31 04:58:39 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2313967653; not ready for session (expect reconnect)
Jan 31 04:58:39 np0005603787 ceph-mon[75160]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/3037892145,v1:192.168.122.100:6807/3037892145] boot
Jan 31 04:58:39 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Jan 31 04:58:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:39 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 04:58:39 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 04:58:39 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:39 np0005603787 ceph-osd[86934]: osd.1 13 state: booting -> active
Jan 31 04:58:39 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[12,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:58:39 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:39 np0005603787 ceph-mon[75160]: OSD bench result of 4441.232959 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 04:58:39 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 31 04:58:39 np0005603787 ceph-mon[75160]: osd.1 [v2:192.168.122.100:6806/3037892145,v1:192.168.122.100:6807/3037892145] boot
Jan 31 04:58:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:58:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:58:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:40 np0005603787 podman[88997]: 2026-01-31 09:58:40.402669279 +0000 UTC m=+0.053845274 container create 36ba1861f4a4d25bae9fe7f49c8bd846e1ceab30441b9c84b078fcffe4c21a8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 04:58:40 np0005603787 systemd[1]: Started libpod-conmon-36ba1861f4a4d25bae9fe7f49c8bd846e1ceab30441b9c84b078fcffe4c21a8e.scope.
Jan 31 04:58:40 np0005603787 podman[88997]: 2026-01-31 09:58:40.366396574 +0000 UTC m=+0.017572599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:40 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 31 04:58:40 np0005603787 python3[89031]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
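(Note: the ansible task above counts the OSDs reported up by running `ceph status` inside a throwaway quay.io/ceph/ceph:v20 container and filtering the JSON with jq. A rough host-side equivalent, sketched under the assumption that cephadm and the admin keyring are available on the node; it is not part of this playbook:

  # Count OSDs currently marked "up" (same jq filter as the task above)
  cephadm shell -- ceph status --format json | jq .osdmap.num_up_osds
)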
Jan 31 04:58:40 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2313967653; not ready for session (expect reconnect)
Jan 31 04:58:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:40 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:40 np0005603787 podman[88997]: 2026-01-31 09:58:40.520383554 +0000 UTC m=+0.171559579 container init 36ba1861f4a4d25bae9fe7f49c8bd846e1ceab30441b9c84b078fcffe4c21a8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 04:58:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Jan 31 04:58:40 np0005603787 podman[88997]: 2026-01-31 09:58:40.527138557 +0000 UTC m=+0.178314552 container start 36ba1861f4a4d25bae9fe7f49c8bd846e1ceab30441b9c84b078fcffe4c21a8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_banach, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:40 np0005603787 musing_banach[89034]: 167 167
Jan 31 04:58:40 np0005603787 systemd[1]: libpod-36ba1861f4a4d25bae9fe7f49c8bd846e1ceab30441b9c84b078fcffe4c21a8e.scope: Deactivated successfully.
Jan 31 04:58:40 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Jan 31 04:58:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:40 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:40 np0005603787 podman[88997]: 2026-01-31 09:58:40.559803374 +0000 UTC m=+0.210979379 container attach 36ba1861f4a4d25bae9fe7f49c8bd846e1ceab30441b9c84b078fcffe4c21a8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_banach, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:40 np0005603787 podman[88997]: 2026-01-31 09:58:40.560789461 +0000 UTC m=+0.211965466 container died 36ba1861f4a4d25bae9fe7f49c8bd846e1ceab30441b9c84b078fcffe4c21a8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_banach, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 31 04:58:40 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[12,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:58:40 np0005603787 systemd[1]: var-lib-containers-storage-overlay-4350066b7510ca4a85edcd32789fe60004c5be427fe62f2e0228b747a169eb96-merged.mount: Deactivated successfully.
Jan 31 04:58:40 np0005603787 ceph-mgr[75453]: [devicehealth INFO root] creating main.db for devicehealth
Jan 31 04:58:40 np0005603787 podman[89038]: 2026-01-31 09:58:40.649429787 +0000 UTC m=+0.129236739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:40 np0005603787 podman[88997]: 2026-01-31 09:58:40.81753824 +0000 UTC m=+0.468714245 container remove 36ba1861f4a4d25bae9fe7f49c8bd846e1ceab30441b9c84b078fcffe4c21a8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:40 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:40 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:40 np0005603787 systemd[1]: libpod-conmon-36ba1861f4a4d25bae9fe7f49c8bd846e1ceab30441b9c84b078fcffe4c21a8e.scope: Deactivated successfully.
Jan 31 04:58:40 np0005603787 podman[89038]: 2026-01-31 09:58:40.87056906 +0000 UTC m=+0.350375992 container create 28ac0be0b131287bf18ae6601156262ce329b5804fa62ddc5ebf8a7599196807 (image=quay.io/ceph/ceph:v20, name=focused_mclaren, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 04:58:40 np0005603787 ceph-mgr[75453]: [devicehealth INFO root] Check health
Jan 31 04:58:40 np0005603787 systemd[1]: Started libpod-conmon-28ac0be0b131287bf18ae6601156262ce329b5804fa62ddc5ebf8a7599196807.scope.
Jan 31 04:58:40 np0005603787 ceph-mgr[75453]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Jan 31 04:58:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 04:58:40 np0005603787 podman[89074]: 2026-01-31 09:58:40.961850167 +0000 UTC m=+0.064536283 container create 80d4517cf5c623cba1fb6dfdddf800384c46b83b9dee3d59834bd090a53c8ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:40 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:40 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae600e53ec1b05fd2e4b612dce5027bf8259721968759dc65c5f0bc51f2c6d83/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:40 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae600e53ec1b05fd2e4b612dce5027bf8259721968759dc65c5f0bc51f2c6d83/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:40 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae600e53ec1b05fd2e4b612dce5027bf8259721968759dc65c5f0bc51f2c6d83/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 04:58:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 31 04:58:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 04:58:41 np0005603787 systemd[1]: Started libpod-conmon-80d4517cf5c623cba1fb6dfdddf800384c46b83b9dee3d59834bd090a53c8ec4.scope.
Jan 31 04:58:41 np0005603787 podman[89074]: 2026-01-31 09:58:40.920260878 +0000 UTC m=+0.022947014 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:41 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:41 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8143f421f375a7beeaac7bd04149c591d1d7b277ee1f16f8890bb65465f10ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:41 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8143f421f375a7beeaac7bd04149c591d1d7b277ee1f16f8890bb65465f10ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:41 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8143f421f375a7beeaac7bd04149c591d1d7b277ee1f16f8890bb65465f10ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:41 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8143f421f375a7beeaac7bd04149c591d1d7b277ee1f16f8890bb65465f10ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v34: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 31 04:58:41 np0005603787 podman[89038]: 2026-01-31 09:58:41.051411018 +0000 UTC m=+0.531217980 container init 28ac0be0b131287bf18ae6601156262ce329b5804fa62ddc5ebf8a7599196807 (image=quay.io/ceph/ceph:v20, name=focused_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 04:58:41 np0005603787 podman[89038]: 2026-01-31 09:58:41.059527399 +0000 UTC m=+0.539334331 container start 28ac0be0b131287bf18ae6601156262ce329b5804fa62ddc5ebf8a7599196807 (image=quay.io/ceph/ceph:v20, name=focused_mclaren, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:41 np0005603787 podman[89074]: 2026-01-31 09:58:41.082222175 +0000 UTC m=+0.184908311 container init 80d4517cf5c623cba1fb6dfdddf800384c46b83b9dee3d59834bd090a53c8ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brahmagupta, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Jan 31 04:58:41 np0005603787 podman[89074]: 2026-01-31 09:58:41.086579303 +0000 UTC m=+0.189265419 container start 80d4517cf5c623cba1fb6dfdddf800384c46b83b9dee3d59834bd090a53c8ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brahmagupta, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:41 np0005603787 podman[89074]: 2026-01-31 09:58:41.118471979 +0000 UTC m=+0.221158105 container attach 80d4517cf5c623cba1fb6dfdddf800384c46b83b9dee3d59834bd090a53c8ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:58:41 np0005603787 podman[89038]: 2026-01-31 09:58:41.14759473 +0000 UTC m=+0.627401662 container attach 28ac0be0b131287bf18ae6601156262ce329b5804fa62ddc5ebf8a7599196807 (image=quay.io/ceph/ceph:v20, name=focused_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:58:41 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2313967653; not ready for session (expect reconnect)
Jan 31 04:58:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:41 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:41 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]: [
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:    {
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:        "available": false,
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:        "being_replaced": false,
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:        "ceph_device_lvm": false,
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:        "lsm_data": {},
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:        "lvs": [],
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:        "path": "/dev/sr0",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:        "rejected_reasons": [
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "Has a FileSystem",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "Insufficient space (<5GB)"
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:        ],
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:        "sys_api": {
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "actuators": null,
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "device_nodes": [
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:                "sr0"
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            ],
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "devname": "sr0",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "human_readable_size": "482.00 KB",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "id_bus": "ata",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "model": "QEMU DVD-ROM",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "nr_requests": "2",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "parent": "/dev/sr0",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "partitions": {},
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "path": "/dev/sr0",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "removable": "1",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "rev": "2.5+",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "ro": "0",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "rotational": "1",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "sas_address": "",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "sas_device_handle": "",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "scheduler_mode": "mq-deadline",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "sectors": 0,
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "sectorsize": "2048",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "size": 493568.0,
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "support_discard": "2048",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "type": "disk",
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:            "vendor": "QEMU"
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:        }
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]:    }
Jan 31 04:58:41 np0005603787 flamboyant_brahmagupta[89109]: ]
Jan 31 04:58:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Jan 31 04:58:41 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Jan 31 04:58:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:41 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:41 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:41 np0005603787 systemd[1]: libpod-80d4517cf5c623cba1fb6dfdddf800384c46b83b9dee3d59834bd090a53c8ec4.scope: Deactivated successfully.
Jan 31 04:58:41 np0005603787 podman[89074]: 2026-01-31 09:58:41.553456847 +0000 UTC m=+0.656142953 container died 80d4517cf5c623cba1fb6dfdddf800384c46b83b9dee3d59834bd090a53c8ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 04:58:41 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1821860743' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 04:58:41 np0005603787 focused_mclaren[89097]: 
Jan 31 04:58:41 np0005603787 focused_mclaren[89097]: {"fsid":"962d77ae-dc67-5de8-89d8-3d1670c67b61","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":80,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":15,"num_osds":3,"num_up_osds":2,"osd_up_since":1769853519,"num_in_osds":3,"osd_in_since":1769853497,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":447000576,"bytes_avail":21023641600,"bytes_total":21470642176,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2026-01-31T09:57:19:176309+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-31T09:57:19.182834+0000","services":{}},"progress_events":{}}
Jan 31 04:58:41 np0005603787 systemd[1]: libpod-28ac0be0b131287bf18ae6601156262ce329b5804fa62ddc5ebf8a7599196807.scope: Deactivated successfully.
Jan 31 04:58:42 np0005603787 systemd[1]: var-lib-containers-storage-overlay-e8143f421f375a7beeaac7bd04149c591d1d7b277ee1f16f8890bb65465f10ed-merged.mount: Deactivated successfully.
Jan 31 04:58:42 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2313967653; not ready for session (expect reconnect)
Jan 31 04:58:42 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:42 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:42 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:42 np0005603787 ceph-mon[75160]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 04:58:42 np0005603787 ceph-mon[75160]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 04:58:42 np0005603787 podman[89074]: 2026-01-31 09:58:42.744030535 +0000 UTC m=+1.846716651 container remove 80d4517cf5c623cba1fb6dfdddf800384c46b83b9dee3d59834bd090a53c8ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brahmagupta, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:42 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.mdmqaq(active, since 59s)
Jan 31 04:58:42 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:58:42 np0005603787 systemd[1]: libpod-conmon-80d4517cf5c623cba1fb6dfdddf800384c46b83b9dee3d59834bd090a53c8ec4.scope: Deactivated successfully.
Jan 31 04:58:42 np0005603787 podman[89038]: 2026-01-31 09:58:42.818498976 +0000 UTC m=+2.298305928 container died 28ac0be0b131287bf18ae6601156262ce329b5804fa62ddc5ebf8a7599196807 (image=quay.io/ceph/ceph:v20, name=focused_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v36: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_09:58:43
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Some PGs (1.000000) are inactive; try again later
Jan 31 04:58:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2313967653; not ready for session (expect reconnect)
Jan 31 04:58:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:58:43 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ae600e53ec1b05fd2e4b612dce5027bf8259721968759dc65c5f0bc51f2c6d83-merged.mount: Deactivated successfully.
Jan 31 04:58:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 31 04:58:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 31 04:58:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 31 04:58:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 31 04:58:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 31 04:58:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43684k
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43684k
Jan 31 04:58:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44732552: error parsing value: Value '44732552' is below minimum 939524096
Jan 31 04:58:43 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44732552: error parsing value: Value '44732552' is below minimum 939524096
Jan 31 04:58:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:58:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:58:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 04:58:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:58:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 04:58:44 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:44 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2313967653; not ready for session (expect reconnect)
Jan 31 04:58:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 31 04:58:45 np0005603787 podman[89038]: 2026-01-31 09:58:45.180906973 +0000 UTC m=+4.660713905 container remove 28ac0be0b131287bf18ae6601156262ce329b5804fa62ddc5ebf8a7599196807 (image=quay.io/ceph/ceph:v20, name=focused_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:45 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:58:45 np0005603787 systemd[1]: libpod-conmon-28ac0be0b131287bf18ae6601156262ce329b5804fa62ddc5ebf8a7599196807.scope: Deactivated successfully.
Jan 31 04:58:45 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2313967653; not ready for session (expect reconnect)
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:45 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:45 np0005603787 python3[89947]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:58:45 np0005603787 podman[89961]: 2026-01-31 09:58:45.584933891 +0000 UTC m=+0.019517051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: Adjusting osd_memory_target on compute-0 to 43684k
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: Unable to set osd_memory_target on compute-0 to 44732552: error parsing value: Value '44732552' is below minimum 939524096
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 04:58:45 np0005603787 podman[89961]: 2026-01-31 09:58:45.737258856 +0000 UTC m=+0.171842006 container create 5f90690d0b88913188f398dc47ea70e97aab40dd602243af6bea0aade57b3856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_johnson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:45 np0005603787 systemd[1]: Started libpod-conmon-5f90690d0b88913188f398dc47ea70e97aab40dd602243af6bea0aade57b3856.scope.
Jan 31 04:58:45 np0005603787 podman[89975]: 2026-01-31 09:58:45.744893143 +0000 UTC m=+0.076363474 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:45 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:45 np0005603787 podman[89975]: 2026-01-31 09:58:45.863577315 +0000 UTC m=+0.195047596 container create 92a0150859f963de7b3409f90093ab31cd36941bd4391c3c6d2988355fd77a08 (image=quay.io/ceph/ceph:v20, name=infallible_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 04:58:45 np0005603787 systemd[1]: Started libpod-conmon-92a0150859f963de7b3409f90093ab31cd36941bd4391c3c6d2988355fd77a08.scope.
Jan 31 04:58:45 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:45 np0005603787 podman[89961]: 2026-01-31 09:58:45.958402389 +0000 UTC m=+0.392985549 container init 5f90690d0b88913188f398dc47ea70e97aab40dd602243af6bea0aade57b3856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_johnson, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:45 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a40c6a1cc7b5838502790e5552da5a77ab621aee2241745da42a227c25713d1c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:45 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a40c6a1cc7b5838502790e5552da5a77ab621aee2241745da42a227c25713d1c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:45 np0005603787 podman[89961]: 2026-01-31 09:58:45.966177789 +0000 UTC m=+0.400760929 container start 5f90690d0b88913188f398dc47ea70e97aab40dd602243af6bea0aade57b3856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_johnson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 04:58:45 np0005603787 unruffled_johnson[89991]: 167 167
Jan 31 04:58:45 np0005603787 systemd[1]: libpod-5f90690d0b88913188f398dc47ea70e97aab40dd602243af6bea0aade57b3856.scope: Deactivated successfully.
Jan 31 04:58:46 np0005603787 podman[89961]: 2026-01-31 09:58:46.020215007 +0000 UTC m=+0.454798147 container attach 5f90690d0b88913188f398dc47ea70e97aab40dd602243af6bea0aade57b3856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_johnson, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 04:58:46 np0005603787 podman[89961]: 2026-01-31 09:58:46.020463473 +0000 UTC m=+0.455046613 container died 5f90690d0b88913188f398dc47ea70e97aab40dd602243af6bea0aade57b3856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Jan 31 04:58:46 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f7dfc240b5c9c3339d614f8e30e3b2c41613c323f8f73b1dd14dde20f96bd597-merged.mount: Deactivated successfully.
Jan 31 04:58:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:58:46 np0005603787 podman[89961]: 2026-01-31 09:58:46.238985085 +0000 UTC m=+0.673568225 container remove 5f90690d0b88913188f398dc47ea70e97aab40dd602243af6bea0aade57b3856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:46 np0005603787 systemd[1]: libpod-conmon-5f90690d0b88913188f398dc47ea70e97aab40dd602243af6bea0aade57b3856.scope: Deactivated successfully.
Jan 31 04:58:46 np0005603787 podman[89975]: 2026-01-31 09:58:46.338443755 +0000 UTC m=+0.669914066 container init 92a0150859f963de7b3409f90093ab31cd36941bd4391c3c6d2988355fd77a08 (image=quay.io/ceph/ceph:v20, name=infallible_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:46 np0005603787 podman[89975]: 2026-01-31 09:58:46.344815738 +0000 UTC m=+0.676286059 container start 92a0150859f963de7b3409f90093ab31cd36941bd4391c3c6d2988355fd77a08 (image=quay.io/ceph/ceph:v20, name=infallible_lumiere, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Jan 31 04:58:46 np0005603787 podman[89975]: 2026-01-31 09:58:46.386475768 +0000 UTC m=+0.717946049 container attach 92a0150859f963de7b3409f90093ab31cd36941bd4391c3c6d2988355fd77a08 (image=quay.io/ceph/ceph:v20, name=infallible_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:46 np0005603787 podman[90022]: 2026-01-31 09:58:46.423860243 +0000 UTC m=+0.102930254 container create a0f6a0ac878d226cc9c64418f1a2a5f9727ae15446b2452607eaf2540ccd5119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_pare, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 04:58:46 np0005603787 podman[90022]: 2026-01-31 09:58:46.351054487 +0000 UTC m=+0.030124518 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:46 np0005603787 systemd[1]: Started libpod-conmon-a0f6a0ac878d226cc9c64418f1a2a5f9727ae15446b2452607eaf2540ccd5119.scope.
Jan 31 04:58:46 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2313967653; not ready for session (expect reconnect)
Jan 31 04:58:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:46 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:46 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:46 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fac0d1fd9ca50cb508d4a8f44f7c6b3e3a4997d56f4e80ef80ffd9126fcd04e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fac0d1fd9ca50cb508d4a8f44f7c6b3e3a4997d56f4e80ef80ffd9126fcd04e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fac0d1fd9ca50cb508d4a8f44f7c6b3e3a4997d56f4e80ef80ffd9126fcd04e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fac0d1fd9ca50cb508d4a8f44f7c6b3e3a4997d56f4e80ef80ffd9126fcd04e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fac0d1fd9ca50cb508d4a8f44f7c6b3e3a4997d56f4e80ef80ffd9126fcd04e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:46 np0005603787 podman[90022]: 2026-01-31 09:58:46.580775833 +0000 UTC m=+0.259845834 container init a0f6a0ac878d226cc9c64418f1a2a5f9727ae15446b2452607eaf2540ccd5119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_pare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 04:58:46 np0005603787 podman[90022]: 2026-01-31 09:58:46.58690963 +0000 UTC m=+0.265979641 container start a0f6a0ac878d226cc9c64418f1a2a5f9727ae15446b2452607eaf2540ccd5119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:46 np0005603787 podman[90022]: 2026-01-31 09:58:46.603273903 +0000 UTC m=+0.282343904 container attach a0f6a0ac878d226cc9c64418f1a2a5f9727ae15446b2452607eaf2540ccd5119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 04:58:46 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1280573327' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 04:58:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 31 04:58:46 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1280573327' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 04:58:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Jan 31 04:58:46 np0005603787 infallible_lumiere[89996]: pool 'vms' created
Jan 31 04:58:46 np0005603787 ceph-osd[87996]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 15.947 iops: 4082.355 elapsed_sec: 0.735
Jan 31 04:58:46 np0005603787 ceph-osd[87996]: log_channel(cluster) log [WRN] : OSD bench result of 4082.355380 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 04:58:46 np0005603787 ceph-osd[87996]: osd.2 0 waiting for initial osdmap
Jan 31 04:58:46 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Jan 31 04:58:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:46 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:46 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2[87992]: 2026-01-31T09:58:46.840+0000 7fa99fcfc640 -1 osd.2 0 waiting for initial osdmap
Jan 31 04:58:46 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:46 np0005603787 systemd[1]: libpod-92a0150859f963de7b3409f90093ab31cd36941bd4391c3c6d2988355fd77a08.scope: Deactivated successfully.
Jan 31 04:58:46 np0005603787 podman[89975]: 2026-01-31 09:58:46.859964932 +0000 UTC m=+1.191435213 container died 92a0150859f963de7b3409f90093ab31cd36941bd4391c3c6d2988355fd77a08 (image=quay.io/ceph/ceph:v20, name=infallible_lumiere, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 04:58:46 np0005603787 ceph-osd[87996]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 04:58:46 np0005603787 ceph-osd[87996]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 31 04:58:46 np0005603787 ceph-osd[87996]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 04:58:46 np0005603787 ceph-osd[87996]: osd.2 16 check_osdmap_features require_osd_release unknown -> tentacle
Jan 31 04:58:46 np0005603787 ceph-osd[87996]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 04:58:46 np0005603787 ceph-osd[87996]: osd.2 16 set_numa_affinity not setting numa affinity
Jan 31 04:58:46 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-osd-2[87992]: 2026-01-31T09:58:46.901+0000 7fa99a2ef640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 04:58:46 np0005603787 ceph-osd[87996]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Jan 31 04:58:46 np0005603787 systemd[1]: var-lib-containers-storage-overlay-a40c6a1cc7b5838502790e5552da5a77ab621aee2241745da42a227c25713d1c-merged.mount: Deactivated successfully.
Jan 31 04:58:46 np0005603787 podman[89975]: 2026-01-31 09:58:46.932020227 +0000 UTC m=+1.263490508 container remove 92a0150859f963de7b3409f90093ab31cd36941bd4391c3c6d2988355fd77a08 (image=quay.io/ceph/ceph:v20, name=infallible_lumiere, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:46 np0005603787 systemd[1]: libpod-conmon-92a0150859f963de7b3409f90093ab31cd36941bd4391c3c6d2988355fd77a08.scope: Deactivated successfully.
Jan 31 04:58:46 np0005603787 funny_pare[90058]: --> passed data devices: 0 physical, 3 LVM
Jan 31 04:58:46 np0005603787 funny_pare[90058]: --> All data devices are unavailable
Jan 31 04:58:47 np0005603787 systemd[1]: libpod-a0f6a0ac878d226cc9c64418f1a2a5f9727ae15446b2452607eaf2540ccd5119.scope: Deactivated successfully.
Jan 31 04:58:47 np0005603787 podman[90022]: 2026-01-31 09:58:47.00836848 +0000 UTC m=+0.687438491 container died a0f6a0ac878d226cc9c64418f1a2a5f9727ae15446b2452607eaf2540ccd5119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:58:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v39: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 31 04:58:47 np0005603787 systemd[1]: var-lib-containers-storage-overlay-5fac0d1fd9ca50cb508d4a8f44f7c6b3e3a4997d56f4e80ef80ffd9126fcd04e-merged.mount: Deactivated successfully.
Jan 31 04:58:47 np0005603787 podman[90022]: 2026-01-31 09:58:47.16235619 +0000 UTC m=+0.841426191 container remove a0f6a0ac878d226cc9c64418f1a2a5f9727ae15446b2452607eaf2540ccd5119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:47 np0005603787 systemd[1]: libpod-conmon-a0f6a0ac878d226cc9c64418f1a2a5f9727ae15446b2452607eaf2540ccd5119.scope: Deactivated successfully.
Jan 31 04:58:47 np0005603787 python3[90130]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:58:47 np0005603787 podman[90133]: 2026-01-31 09:58:47.29203826 +0000 UTC m=+0.068750727 container create 5de06edc6da73bbe27dc5bd378861b53dbdd92a03eafbc48c7a2afeb9c1ee733 (image=quay.io/ceph/ceph:v20, name=sweet_poincare, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:47 np0005603787 systemd[1]: Started libpod-conmon-5de06edc6da73bbe27dc5bd378861b53dbdd92a03eafbc48c7a2afeb9c1ee733.scope.
Jan 31 04:58:47 np0005603787 podman[90133]: 2026-01-31 09:58:47.247311206 +0000 UTC m=+0.024023713 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:47 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2d43b547f0f63e7e22becd35216cf599b198dc3df29858df84e9437c760ef8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2d43b547f0f63e7e22becd35216cf599b198dc3df29858df84e9437c760ef8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:47 np0005603787 podman[90133]: 2026-01-31 09:58:47.389521787 +0000 UTC m=+0.166234254 container init 5de06edc6da73bbe27dc5bd378861b53dbdd92a03eafbc48c7a2afeb9c1ee733 (image=quay.io/ceph/ceph:v20, name=sweet_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:47 np0005603787 podman[90133]: 2026-01-31 09:58:47.39593279 +0000 UTC m=+0.172645247 container start 5de06edc6da73bbe27dc5bd378861b53dbdd92a03eafbc48c7a2afeb9c1ee733 (image=quay.io/ceph/ceph:v20, name=sweet_poincare, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 04:58:47 np0005603787 podman[90133]: 2026-01-31 09:58:47.406018515 +0000 UTC m=+0.182731002 container attach 5de06edc6da73bbe27dc5bd378861b53dbdd92a03eafbc48c7a2afeb9c1ee733 (image=quay.io/ceph/ceph:v20, name=sweet_poincare, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:47 np0005603787 ceph-mgr[75453]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2313967653; not ready for session (expect reconnect)
Jan 31 04:58:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:47 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:47 np0005603787 ceph-mgr[75453]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 04:58:47 np0005603787 podman[90229]: 2026-01-31 09:58:47.573377407 +0000 UTC m=+0.037031276 container create 0c0e34bb252526ed350ce702f5a3c39c97d00e78044b13e601a107e37b4cc6cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_curran, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:47 np0005603787 systemd[1]: Started libpod-conmon-0c0e34bb252526ed350ce702f5a3c39c97d00e78044b13e601a107e37b4cc6cc.scope.
Jan 31 04:58:47 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:47 np0005603787 podman[90229]: 2026-01-31 09:58:47.55504781 +0000 UTC m=+0.018701709 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:47 np0005603787 podman[90229]: 2026-01-31 09:58:47.654303884 +0000 UTC m=+0.117957773 container init 0c0e34bb252526ed350ce702f5a3c39c97d00e78044b13e601a107e37b4cc6cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_curran, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:47 np0005603787 podman[90229]: 2026-01-31 09:58:47.659385372 +0000 UTC m=+0.123039241 container start 0c0e34bb252526ed350ce702f5a3c39c97d00e78044b13e601a107e37b4cc6cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_curran, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 04:58:47 np0005603787 hardcore_curran[90246]: 167 167
Jan 31 04:58:47 np0005603787 systemd[1]: libpod-0c0e34bb252526ed350ce702f5a3c39c97d00e78044b13e601a107e37b4cc6cc.scope: Deactivated successfully.
Jan 31 04:58:47 np0005603787 conmon[90246]: conmon 0c0e34bb252526ed350c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c0e34bb252526ed350ce702f5a3c39c97d00e78044b13e601a107e37b4cc6cc.scope/container/memory.events
Jan 31 04:58:47 np0005603787 podman[90229]: 2026-01-31 09:58:47.678554632 +0000 UTC m=+0.142208501 container attach 0c0e34bb252526ed350ce702f5a3c39c97d00e78044b13e601a107e37b4cc6cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_curran, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 04:58:47 np0005603787 podman[90229]: 2026-01-31 09:58:47.678987745 +0000 UTC m=+0.142641604 container died 0c0e34bb252526ed350ce702f5a3c39c97d00e78044b13e601a107e37b4cc6cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_curran, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 04:58:47 np0005603787 systemd[1]: var-lib-containers-storage-overlay-b57afc16cf82f61bdb5f85ec06dd556acb7e84dc32e7a5235a9d1ea5a15ac09e-merged.mount: Deactivated successfully.
Jan 31 04:58:47 np0005603787 podman[90229]: 2026-01-31 09:58:47.763803696 +0000 UTC m=+0.227457565 container remove 0c0e34bb252526ed350ce702f5a3c39c97d00e78044b13e601a107e37b4cc6cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:47 np0005603787 systemd[1]: libpod-conmon-0c0e34bb252526ed350ce702f5a3c39c97d00e78044b13e601a107e37b4cc6cc.scope: Deactivated successfully.
Jan 31 04:58:47 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/1280573327' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 04:58:47 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/1280573327' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 04:58:47 np0005603787 ceph-mon[75160]: OSD bench result of 4082.355380 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 04:58:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 04:58:47 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1865762876' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 04:58:47 np0005603787 ceph-osd[87996]: osd.2 16 tick checking mon for new map
Jan 31 04:58:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 31 04:58:47 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1865762876' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 04:58:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Jan 31 04:58:47 np0005603787 sweet_poincare[90196]: pool 'volumes' created
Jan 31 04:58:47 np0005603787 ceph-mon[75160]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2313967653,v1:192.168.122.100:6811/2313967653] boot
Jan 31 04:58:47 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Jan 31 04:58:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 04:58:47 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 04:58:47 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 17 pg[3.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:58:47 np0005603787 ceph-osd[87996]: osd.2 17 state: booting -> active
Jan 31 04:58:47 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 pi=[16,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:58:47 np0005603787 systemd[1]: libpod-5de06edc6da73bbe27dc5bd378861b53dbdd92a03eafbc48c7a2afeb9c1ee733.scope: Deactivated successfully.
Jan 31 04:58:47 np0005603787 podman[90133]: 2026-01-31 09:58:47.878454419 +0000 UTC m=+0.655166876 container died 5de06edc6da73bbe27dc5bd378861b53dbdd92a03eafbc48c7a2afeb9c1ee733 (image=quay.io/ceph/ceph:v20, name=sweet_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:47 np0005603787 podman[90273]: 2026-01-31 09:58:47.909023599 +0000 UTC m=+0.063843165 container create 268351bd487baf0cf446ce237593ab35b808563e06a639ee32800c5854a602a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_ride, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:47 np0005603787 systemd[1]: var-lib-containers-storage-overlay-4e2d43b547f0f63e7e22becd35216cf599b198dc3df29858df84e9437c760ef8-merged.mount: Deactivated successfully.
Jan 31 04:58:47 np0005603787 podman[90133]: 2026-01-31 09:58:47.951225004 +0000 UTC m=+0.727937461 container remove 5de06edc6da73bbe27dc5bd378861b53dbdd92a03eafbc48c7a2afeb9c1ee733 (image=quay.io/ceph/ceph:v20, name=sweet_poincare, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 04:58:47 np0005603787 systemd[1]: libpod-conmon-5de06edc6da73bbe27dc5bd378861b53dbdd92a03eafbc48c7a2afeb9c1ee733.scope: Deactivated successfully.
Jan 31 04:58:47 np0005603787 podman[90273]: 2026-01-31 09:58:47.866144844 +0000 UTC m=+0.020964440 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:48 np0005603787 systemd[1]: Started libpod-conmon-268351bd487baf0cf446ce237593ab35b808563e06a639ee32800c5854a602a5.scope.
Jan 31 04:58:48 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f472b39681d6b03e76d7679572cdd4f031c1b98b9bb21a8c460ee84f5134bb49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f472b39681d6b03e76d7679572cdd4f031c1b98b9bb21a8c460ee84f5134bb49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f472b39681d6b03e76d7679572cdd4f031c1b98b9bb21a8c460ee84f5134bb49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f472b39681d6b03e76d7679572cdd4f031c1b98b9bb21a8c460ee84f5134bb49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:48 np0005603787 podman[90273]: 2026-01-31 09:58:48.052996787 +0000 UTC m=+0.207816393 container init 268351bd487baf0cf446ce237593ab35b808563e06a639ee32800c5854a602a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:48 np0005603787 podman[90273]: 2026-01-31 09:58:48.059712808 +0000 UTC m=+0.214532384 container start 268351bd487baf0cf446ce237593ab35b808563e06a639ee32800c5854a602a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_ride, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:48 np0005603787 podman[90273]: 2026-01-31 09:58:48.072169637 +0000 UTC m=+0.226989223 container attach 268351bd487baf0cf446ce237593ab35b808563e06a639ee32800c5854a602a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_ride, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:48 np0005603787 python3[90335]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:58:48 np0005603787 podman[90336]: 2026-01-31 09:58:48.259386799 +0000 UTC m=+0.041027424 container create fdbb23ea19f97c34a6e71a744f238dc061924bcab12dc8d673e8e7102b5425b7 (image=quay.io/ceph/ceph:v20, name=silly_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:48 np0005603787 systemd[1]: Started libpod-conmon-fdbb23ea19f97c34a6e71a744f238dc061924bcab12dc8d673e8e7102b5425b7.scope.
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]: {
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:    "0": [
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:        {
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "devices": [
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "/dev/loop3"
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            ],
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "lv_name": "ceph_lv0",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "lv_size": "21470642176",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "name": "ceph_lv0",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "tags": {
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.cluster_name": "ceph",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.crush_device_class": "",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.encrypted": "0",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.objectstore": "bluestore",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.osd_id": "0",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.type": "block",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.vdo": "0",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.with_tpm": "0"
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            },
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "type": "block",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "vg_name": "ceph_vg0"
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:        }
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:    ],
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:    "1": [
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:        {
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "devices": [
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "/dev/loop4"
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            ],
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "lv_name": "ceph_lv1",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "lv_size": "21470642176",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "name": "ceph_lv1",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "tags": {
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.cluster_name": "ceph",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.crush_device_class": "",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.encrypted": "0",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.objectstore": "bluestore",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.osd_id": "1",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.type": "block",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.vdo": "0",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.with_tpm": "0"
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            },
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "type": "block",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "vg_name": "ceph_vg1"
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:        }
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:    ],
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:    "2": [
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:        {
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "devices": [
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "/dev/loop5"
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            ],
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "lv_name": "ceph_lv2",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "lv_size": "21470642176",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "name": "ceph_lv2",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "tags": {
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.cluster_name": "ceph",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.crush_device_class": "",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.encrypted": "0",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.objectstore": "bluestore",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.osd_id": "2",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.type": "block",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.vdo": "0",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:                "ceph.with_tpm": "0"
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            },
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "type": "block",
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:            "vg_name": "ceph_vg2"
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:        }
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]:    ]
Jan 31 04:58:48 np0005603787 nostalgic_ride[90305]: }
Jan 31 04:58:48 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd68fe08b3d301652398895205415c49989cc7a2bd2affd402313f57d7f98bcb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd68fe08b3d301652398895205415c49989cc7a2bd2affd402313f57d7f98bcb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:48 np0005603787 podman[90336]: 2026-01-31 09:58:48.334098047 +0000 UTC m=+0.115738692 container init fdbb23ea19f97c34a6e71a744f238dc061924bcab12dc8d673e8e7102b5425b7 (image=quay.io/ceph/ceph:v20, name=silly_varahamihira, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 04:58:48 np0005603787 systemd[1]: libpod-268351bd487baf0cf446ce237593ab35b808563e06a639ee32800c5854a602a5.scope: Deactivated successfully.
Jan 31 04:58:48 np0005603787 podman[90273]: 2026-01-31 09:58:48.335460094 +0000 UTC m=+0.490279690 container died 268351bd487baf0cf446ce237593ab35b808563e06a639ee32800c5854a602a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 04:58:48 np0005603787 podman[90336]: 2026-01-31 09:58:48.239757436 +0000 UTC m=+0.021398081 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:48 np0005603787 podman[90336]: 2026-01-31 09:58:48.340397008 +0000 UTC m=+0.122037653 container start fdbb23ea19f97c34a6e71a744f238dc061924bcab12dc8d673e8e7102b5425b7 (image=quay.io/ceph/ceph:v20, name=silly_varahamihira, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 04:58:48 np0005603787 podman[90336]: 2026-01-31 09:58:48.354989054 +0000 UTC m=+0.136629689 container attach fdbb23ea19f97c34a6e71a744f238dc061924bcab12dc8d673e8e7102b5425b7 (image=quay.io/ceph/ceph:v20, name=silly_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:48 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f472b39681d6b03e76d7679572cdd4f031c1b98b9bb21a8c460ee84f5134bb49-merged.mount: Deactivated successfully.
Jan 31 04:58:48 np0005603787 podman[90273]: 2026-01-31 09:58:48.403647735 +0000 UTC m=+0.558467301 container remove 268351bd487baf0cf446ce237593ab35b808563e06a639ee32800c5854a602a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 04:58:48 np0005603787 systemd[1]: libpod-conmon-268351bd487baf0cf446ce237593ab35b808563e06a639ee32800c5854a602a5.scope: Deactivated successfully.
Jan 31 04:58:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 04:58:48 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4041490245' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 04:58:48 np0005603787 podman[90451]: 2026-01-31 09:58:48.778572723 +0000 UTC m=+0.043925893 container create a3359bfc0605b81208c3380ba68887d622bc8681226357486a245db1f791c5aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 04:58:48 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/1865762876' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 04:58:48 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/1865762876' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 04:58:48 np0005603787 ceph-mon[75160]: osd.2 [v2:192.168.122.100:6810/2313967653,v1:192.168.122.100:6811/2313967653] boot
Jan 31 04:58:48 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/4041490245' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 04:58:48 np0005603787 systemd[1]: Started libpod-conmon-a3359bfc0605b81208c3380ba68887d622bc8681226357486a245db1f791c5aa.scope.
Jan 31 04:58:48 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:48 np0005603787 podman[90451]: 2026-01-31 09:58:48.846743783 +0000 UTC m=+0.112096963 container init a3359bfc0605b81208c3380ba68887d622bc8681226357486a245db1f791c5aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:48 np0005603787 podman[90451]: 2026-01-31 09:58:48.754254193 +0000 UTC m=+0.019607373 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:48 np0005603787 podman[90451]: 2026-01-31 09:58:48.852962342 +0000 UTC m=+0.118315502 container start a3359bfc0605b81208c3380ba68887d622bc8681226357486a245db1f791c5aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_thompson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 04:58:48 np0005603787 quizzical_thompson[90471]: 167 167
Jan 31 04:58:48 np0005603787 systemd[1]: libpod-a3359bfc0605b81208c3380ba68887d622bc8681226357486a245db1f791c5aa.scope: Deactivated successfully.
Jan 31 04:58:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 31 04:58:48 np0005603787 podman[90451]: 2026-01-31 09:58:48.866121289 +0000 UTC m=+0.131474449 container attach a3359bfc0605b81208c3380ba68887d622bc8681226357486a245db1f791c5aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 04:58:48 np0005603787 podman[90451]: 2026-01-31 09:58:48.866685814 +0000 UTC m=+0.132038974 container died a3359bfc0605b81208c3380ba68887d622bc8681226357486a245db1f791c5aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_thompson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:48 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4041490245' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 04:58:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Jan 31 04:58:48 np0005603787 silly_varahamihira[90355]: pool 'backups' created
Jan 31 04:58:48 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Jan 31 04:58:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:58:48 np0005603787 systemd[1]: libpod-fdbb23ea19f97c34a6e71a744f238dc061924bcab12dc8d673e8e7102b5425b7.scope: Deactivated successfully.
Jan 31 04:58:48 np0005603787 podman[90336]: 2026-01-31 09:58:48.890275365 +0000 UTC m=+0.671915990 container died fdbb23ea19f97c34a6e71a744f238dc061924bcab12dc8d673e8e7102b5425b7 (image=quay.io/ceph/ceph:v20, name=silly_varahamihira, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 04:58:48 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 pi=[16,17)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:58:48 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:58:48 np0005603787 systemd[1]: var-lib-containers-storage-overlay-3843b00525f581237c8245f0489c2fa12f8aac31c6f888add0dccf15df3cd7d1-merged.mount: Deactivated successfully.
Jan 31 04:58:48 np0005603787 systemd[1]: var-lib-containers-storage-overlay-fd68fe08b3d301652398895205415c49989cc7a2bd2affd402313f57d7f98bcb-merged.mount: Deactivated successfully.
Jan 31 04:58:48 np0005603787 podman[90336]: 2026-01-31 09:58:48.969017512 +0000 UTC m=+0.750658137 container remove fdbb23ea19f97c34a6e71a744f238dc061924bcab12dc8d673e8e7102b5425b7 (image=quay.io/ceph/ceph:v20, name=silly_varahamihira, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 04:58:48 np0005603787 systemd[1]: libpod-conmon-fdbb23ea19f97c34a6e71a744f238dc061924bcab12dc8d673e8e7102b5425b7.scope: Deactivated successfully.
Jan 31 04:58:48 np0005603787 podman[90451]: 2026-01-31 09:58:48.994693899 +0000 UTC m=+0.260047059 container remove a3359bfc0605b81208c3380ba68887d622bc8681226357486a245db1f791c5aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_thompson, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 04:58:48 np0005603787 systemd[1]: libpod-conmon-a3359bfc0605b81208c3380ba68887d622bc8681226357486a245db1f791c5aa.scope: Deactivated successfully.
Jan 31 04:58:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v42: 4 pgs: 1 creating+peering, 2 unknown, 1 active+clean; 449 KiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Jan 31 04:58:49 np0005603787 podman[90531]: 2026-01-31 09:58:49.125321995 +0000 UTC m=+0.049175766 container create a9707b5b667ffd7c84c0485dc6626e3616701ecb56e46c739c08e733ab767145 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 04:58:49 np0005603787 systemd[1]: Started libpod-conmon-a9707b5b667ffd7c84c0485dc6626e3616701ecb56e46c739c08e733ab767145.scope.
Jan 31 04:58:49 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:49 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dabf5d83984f13a713ce33fcfccc205f3d1dfa8276cb8f0a10a467cf6833d77c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:49 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dabf5d83984f13a713ce33fcfccc205f3d1dfa8276cb8f0a10a467cf6833d77c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:49 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dabf5d83984f13a713ce33fcfccc205f3d1dfa8276cb8f0a10a467cf6833d77c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:49 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dabf5d83984f13a713ce33fcfccc205f3d1dfa8276cb8f0a10a467cf6833d77c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:49 np0005603787 podman[90531]: 2026-01-31 09:58:49.10083278 +0000 UTC m=+0.024686601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:58:49 np0005603787 podman[90531]: 2026-01-31 09:58:49.198487231 +0000 UTC m=+0.122341012 container init a9707b5b667ffd7c84c0485dc6626e3616701ecb56e46c739c08e733ab767145 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_noether, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:49 np0005603787 podman[90531]: 2026-01-31 09:58:49.203672031 +0000 UTC m=+0.127525802 container start a9707b5b667ffd7c84c0485dc6626e3616701ecb56e46c739c08e733ab767145 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_noether, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 04:58:49 np0005603787 podman[90531]: 2026-01-31 09:58:49.212142132 +0000 UTC m=+0.135995983 container attach a9707b5b667ffd7c84c0485dc6626e3616701ecb56e46c739c08e733ab767145 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_noether, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 04:58:49 np0005603787 python3[90540]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:58:49 np0005603787 podman[90556]: 2026-01-31 09:58:49.271916154 +0000 UTC m=+0.040385507 container create d9b7608dc44ccd32ef1a6cd537ec5b5e14d8aa898a203236d1b8a9d28720c8a1 (image=quay.io/ceph/ceph:v20, name=sweet_meitner, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 04:58:49 np0005603787 systemd[1]: Started libpod-conmon-d9b7608dc44ccd32ef1a6cd537ec5b5e14d8aa898a203236d1b8a9d28720c8a1.scope.
Jan 31 04:58:49 np0005603787 podman[90556]: 2026-01-31 09:58:49.253653368 +0000 UTC m=+0.022122751 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:49 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:49 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/775da50da8f11c2a5077a9f0c354b1030f53b4e0f2f5001db286b038095ac460/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:49 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/775da50da8f11c2a5077a9f0c354b1030f53b4e0f2f5001db286b038095ac460/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:49 np0005603787 podman[90556]: 2026-01-31 09:58:49.370401838 +0000 UTC m=+0.138871211 container init d9b7608dc44ccd32ef1a6cd537ec5b5e14d8aa898a203236d1b8a9d28720c8a1 (image=quay.io/ceph/ceph:v20, name=sweet_meitner, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:49 np0005603787 podman[90556]: 2026-01-31 09:58:49.37527385 +0000 UTC m=+0.143743213 container start d9b7608dc44ccd32ef1a6cd537ec5b5e14d8aa898a203236d1b8a9d28720c8a1 (image=quay.io/ceph/ceph:v20, name=sweet_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:49 np0005603787 podman[90556]: 2026-01-31 09:58:49.378998341 +0000 UTC m=+0.147467714 container attach d9b7608dc44ccd32ef1a6cd537ec5b5e14d8aa898a203236d1b8a9d28720c8a1 (image=quay.io/ceph/ceph:v20, name=sweet_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 04:58:49 np0005603787 lvm[90669]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 04:58:49 np0005603787 lvm[90668]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 04:58:49 np0005603787 lvm[90669]: VG ceph_vg1 finished
Jan 31 04:58:49 np0005603787 lvm[90668]: VG ceph_vg0 finished
Jan 31 04:58:49 np0005603787 lvm[90671]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 04:58:49 np0005603787 lvm[90671]: VG ceph_vg2 finished
Jan 31 04:58:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 04:58:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4127120643' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 04:58:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 31 04:58:49 np0005603787 elegant_noether[90551]: {}
Jan 31 04:58:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4127120643' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 04:58:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Jan 31 04:58:49 np0005603787 sweet_meitner[90572]: pool 'images' created
Jan 31 04:58:49 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/4041490245' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 04:58:49 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/4127120643' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 04:58:49 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Jan 31 04:58:49 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:58:49 np0005603787 systemd[1]: libpod-d9b7608dc44ccd32ef1a6cd537ec5b5e14d8aa898a203236d1b8a9d28720c8a1.scope: Deactivated successfully.
Jan 31 04:58:49 np0005603787 podman[90556]: 2026-01-31 09:58:49.908182446 +0000 UTC m=+0.676651809 container died d9b7608dc44ccd32ef1a6cd537ec5b5e14d8aa898a203236d1b8a9d28720c8a1 (image=quay.io/ceph/ceph:v20, name=sweet_meitner, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 31 04:58:49 np0005603787 systemd[1]: libpod-a9707b5b667ffd7c84c0485dc6626e3616701ecb56e46c739c08e733ab767145.scope: Deactivated successfully.
Jan 31 04:58:49 np0005603787 podman[90531]: 2026-01-31 09:58:49.933642477 +0000 UTC m=+0.857496268 container died a9707b5b667ffd7c84c0485dc6626e3616701ecb56e46c739c08e733ab767145 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_noether, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:49 np0005603787 systemd[1]: var-lib-containers-storage-overlay-775da50da8f11c2a5077a9f0c354b1030f53b4e0f2f5001db286b038095ac460-merged.mount: Deactivated successfully.
Jan 31 04:58:49 np0005603787 podman[90556]: 2026-01-31 09:58:49.978475484 +0000 UTC m=+0.746944837 container remove d9b7608dc44ccd32ef1a6cd537ec5b5e14d8aa898a203236d1b8a9d28720c8a1 (image=quay.io/ceph/ceph:v20, name=sweet_meitner, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 04:58:49 np0005603787 systemd[1]: libpod-conmon-d9b7608dc44ccd32ef1a6cd537ec5b5e14d8aa898a203236d1b8a9d28720c8a1.scope: Deactivated successfully.
Jan 31 04:58:49 np0005603787 systemd[1]: var-lib-containers-storage-overlay-dabf5d83984f13a713ce33fcfccc205f3d1dfa8276cb8f0a10a467cf6833d77c-merged.mount: Deactivated successfully.
Jan 31 04:58:50 np0005603787 podman[90531]: 2026-01-31 09:58:50.019335213 +0000 UTC m=+0.943188984 container remove a9707b5b667ffd7c84c0485dc6626e3616701ecb56e46c739c08e733ab767145 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 31 04:58:50 np0005603787 systemd[1]: libpod-conmon-a9707b5b667ffd7c84c0485dc6626e3616701ecb56e46c739c08e733ab767145.scope: Deactivated successfully.
Jan 31 04:58:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:58:50 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:58:50 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:58:50 np0005603787 python3[90735]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:58:50 np0005603787 podman[90752]: 2026-01-31 09:58:50.285155939 +0000 UTC m=+0.044349425 container create 9521895a9dd6fa2ef0310954cc310c86690d1a0e09774ea2cab122db6a66fa63 (image=quay.io/ceph/ceph:v20, name=wizardly_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 04:58:50 np0005603787 systemd[1]: Started libpod-conmon-9521895a9dd6fa2ef0310954cc310c86690d1a0e09774ea2cab122db6a66fa63.scope.
Jan 31 04:58:50 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:50 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/769aa6f055236329eea5cfe963fd44d3608dba5345a004fd8ea4cd80b82d1894/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:50 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/769aa6f055236329eea5cfe963fd44d3608dba5345a004fd8ea4cd80b82d1894/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:50 np0005603787 podman[90752]: 2026-01-31 09:58:50.265599588 +0000 UTC m=+0.024793104 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:50 np0005603787 podman[90752]: 2026-01-31 09:58:50.367753741 +0000 UTC m=+0.126947317 container init 9521895a9dd6fa2ef0310954cc310c86690d1a0e09774ea2cab122db6a66fa63 (image=quay.io/ceph/ceph:v20, name=wizardly_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 04:58:50 np0005603787 podman[90752]: 2026-01-31 09:58:50.372914232 +0000 UTC m=+0.132107758 container start 9521895a9dd6fa2ef0310954cc310c86690d1a0e09774ea2cab122db6a66fa63 (image=quay.io/ceph/ceph:v20, name=wizardly_mendeleev, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:50 np0005603787 podman[90752]: 2026-01-31 09:58:50.378372319 +0000 UTC m=+0.137565805 container attach 9521895a9dd6fa2ef0310954cc310c86690d1a0e09774ea2cab122db6a66fa63 (image=quay.io/ceph/ceph:v20, name=wizardly_mendeleev, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 04:58:50 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3588142473' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 04:58:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 31 04:58:50 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3588142473' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 04:58:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Jan 31 04:58:50 np0005603787 wizardly_mendeleev[90767]: pool 'cephfs.cephfs.meta' created
Jan 31 04:58:50 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Jan 31 04:58:50 np0005603787 systemd[1]: libpod-9521895a9dd6fa2ef0310954cc310c86690d1a0e09774ea2cab122db6a66fa63.scope: Deactivated successfully.
Jan 31 04:58:50 np0005603787 conmon[90767]: conmon 9521895a9dd6fa2ef031 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9521895a9dd6fa2ef0310954cc310c86690d1a0e09774ea2cab122db6a66fa63.scope/container/memory.events
Jan 31 04:58:50 np0005603787 podman[90752]: 2026-01-31 09:58:50.917454992 +0000 UTC m=+0.676648488 container died 9521895a9dd6fa2ef0310954cc310c86690d1a0e09774ea2cab122db6a66fa63 (image=quay.io/ceph/ceph:v20, name=wizardly_mendeleev, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:58:50 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/4127120643' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 04:58:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:58:50 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/3588142473' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 04:58:50 np0005603787 systemd[1]: var-lib-containers-storage-overlay-769aa6f055236329eea5cfe963fd44d3608dba5345a004fd8ea4cd80b82d1894-merged.mount: Deactivated successfully.
Jan 31 04:58:50 np0005603787 podman[90752]: 2026-01-31 09:58:50.971975472 +0000 UTC m=+0.731168988 container remove 9521895a9dd6fa2ef0310954cc310c86690d1a0e09774ea2cab122db6a66fa63 (image=quay.io/ceph/ceph:v20, name=wizardly_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:58:50 np0005603787 systemd[1]: libpod-conmon-9521895a9dd6fa2ef0310954cc310c86690d1a0e09774ea2cab122db6a66fa63.scope: Deactivated successfully.
Jan 31 04:58:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 20 pg[6.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:58:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v45: 6 pgs: 2 active+clean, 1 creating+peering, 3 unknown; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 31 04:58:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:58:51 np0005603787 python3[90833]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:58:51 np0005603787 podman[90834]: 2026-01-31 09:58:51.390644047 +0000 UTC m=+0.116018979 container create bc8cb860731da9f81f2db6c3c6c2733eeafaafa7880efacfeac4f2aa938665ec (image=quay.io/ceph/ceph:v20, name=nostalgic_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:51 np0005603787 podman[90834]: 2026-01-31 09:58:51.300221923 +0000 UTC m=+0.025596845 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:51 np0005603787 systemd[1]: Started libpod-conmon-bc8cb860731da9f81f2db6c3c6c2733eeafaafa7880efacfeac4f2aa938665ec.scope.
Jan 31 04:58:51 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:51 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/392d26b8da6a37bea3cf27e99b6902b3f71a975e5da1d55858accfb056823742/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:51 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/392d26b8da6a37bea3cf27e99b6902b3f71a975e5da1d55858accfb056823742/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:51 np0005603787 podman[90834]: 2026-01-31 09:58:51.628044942 +0000 UTC m=+0.353419874 container init bc8cb860731da9f81f2db6c3c6c2733eeafaafa7880efacfeac4f2aa938665ec (image=quay.io/ceph/ceph:v20, name=nostalgic_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 04:58:51 np0005603787 podman[90834]: 2026-01-31 09:58:51.633207572 +0000 UTC m=+0.358582474 container start bc8cb860731da9f81f2db6c3c6c2733eeafaafa7880efacfeac4f2aa938665ec (image=quay.io/ceph/ceph:v20, name=nostalgic_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:51 np0005603787 podman[90834]: 2026-01-31 09:58:51.671019189 +0000 UTC m=+0.396394101 container attach bc8cb860731da9f81f2db6c3c6c2733eeafaafa7880efacfeac4f2aa938665ec (image=quay.io/ceph/ceph:v20, name=nostalgic_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 31 04:58:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Jan 31 04:58:51 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Jan 31 04:58:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 04:58:52 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/5013988' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 04:58:52 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/3588142473' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 04:58:52 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:58:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 31 04:58:53 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/5013988' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 04:58:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Jan 31 04:58:53 np0005603787 nostalgic_kilby[90849]: pool 'cephfs.cephfs.data' created
Jan 31 04:58:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v47: 6 pgs: 4 active+clean, 1 creating+peering, 1 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:58:53 np0005603787 systemd[1]: libpod-bc8cb860731da9f81f2db6c3c6c2733eeafaafa7880efacfeac4f2aa938665ec.scope: Deactivated successfully.
Jan 31 04:58:53 np0005603787 podman[90834]: 2026-01-31 09:58:53.047677448 +0000 UTC m=+1.773052340 container died bc8cb860731da9f81f2db6c3c6c2733eeafaafa7880efacfeac4f2aa938665ec (image=quay.io/ceph/ceph:v20, name=nostalgic_kilby, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 04:58:53 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Jan 31 04:58:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:58:53 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/5013988' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 04:58:53 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/5013988' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 04:58:53 np0005603787 systemd[1]: var-lib-containers-storage-overlay-392d26b8da6a37bea3cf27e99b6902b3f71a975e5da1d55858accfb056823742-merged.mount: Deactivated successfully.
Jan 31 04:58:54 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 31 04:58:54 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Jan 31 04:58:54 np0005603787 podman[90834]: 2026-01-31 09:58:54.343507363 +0000 UTC m=+3.068882265 container remove bc8cb860731da9f81f2db6c3c6c2733eeafaafa7880efacfeac4f2aa938665ec (image=quay.io/ceph/ceph:v20, name=nostalgic_kilby, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:54 np0005603787 systemd[1]: libpod-conmon-bc8cb860731da9f81f2db6c3c6c2733eeafaafa7880efacfeac4f2aa938665ec.scope: Deactivated successfully.
Jan 31 04:58:54 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Jan 31 04:58:54 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:58:54 np0005603787 python3[90916]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:58:54 np0005603787 podman[90917]: 2026-01-31 09:58:54.706433096 +0000 UTC m=+0.071993636 container create ce9e3ae63119fd20a63236132f371bce9aab84d2f307d24b5051073026cf73c6 (image=quay.io/ceph/ceph:v20, name=hardcore_dubinsky, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 04:58:54 np0005603787 podman[90917]: 2026-01-31 09:58:54.654769773 +0000 UTC m=+0.020330343 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:54 np0005603787 systemd[1]: Started libpod-conmon-ce9e3ae63119fd20a63236132f371bce9aab84d2f307d24b5051073026cf73c6.scope.
Jan 31 04:58:54 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:54 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f6eaad92d49f16c15533580edace972f8cbdde71bed3834b408ba846a15f25/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:54 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f6eaad92d49f16c15533580edace972f8cbdde71bed3834b408ba846a15f25/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:54 np0005603787 podman[90917]: 2026-01-31 09:58:54.953969354 +0000 UTC m=+0.319529844 container init ce9e3ae63119fd20a63236132f371bce9aab84d2f307d24b5051073026cf73c6 (image=quay.io/ceph/ceph:v20, name=hardcore_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:58:54 np0005603787 podman[90917]: 2026-01-31 09:58:54.958919639 +0000 UTC m=+0.324480139 container start ce9e3ae63119fd20a63236132f371bce9aab84d2f307d24b5051073026cf73c6 (image=quay.io/ceph/ceph:v20, name=hardcore_dubinsky, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:54 np0005603787 podman[90917]: 2026-01-31 09:58:54.970726699 +0000 UTC m=+0.336287199 container attach ce9e3ae63119fd20a63236132f371bce9aab84d2f307d24b5051073026cf73c6 (image=quay.io/ceph/ceph:v20, name=hardcore_dubinsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v50: 7 pgs: 6 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:58:55 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 31 04:58:55 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/391499805' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 31 04:58:55 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 31 04:58:55 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/391499805' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 04:58:55 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Jan 31 04:58:55 np0005603787 hardcore_dubinsky[90932]: enabled application 'rbd' on pool 'vms'
Jan 31 04:58:55 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Jan 31 04:58:55 np0005603787 systemd[1]: libpod-ce9e3ae63119fd20a63236132f371bce9aab84d2f307d24b5051073026cf73c6.scope: Deactivated successfully.
Jan 31 04:58:55 np0005603787 podman[90957]: 2026-01-31 09:58:55.474276198 +0000 UTC m=+0.033203402 container died ce9e3ae63119fd20a63236132f371bce9aab84d2f307d24b5051073026cf73c6 (image=quay.io/ceph/ceph:v20, name=hardcore_dubinsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 04:58:55 np0005603787 systemd[1]: var-lib-containers-storage-overlay-30f6eaad92d49f16c15533580edace972f8cbdde71bed3834b408ba846a15f25-merged.mount: Deactivated successfully.
Jan 31 04:58:55 np0005603787 podman[90957]: 2026-01-31 09:58:55.534318079 +0000 UTC m=+0.093245283 container remove ce9e3ae63119fd20a63236132f371bce9aab84d2f307d24b5051073026cf73c6 (image=quay.io/ceph/ceph:v20, name=hardcore_dubinsky, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:58:55 np0005603787 systemd[1]: libpod-conmon-ce9e3ae63119fd20a63236132f371bce9aab84d2f307d24b5051073026cf73c6.scope: Deactivated successfully.
Jan 31 04:58:55 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/391499805' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 31 04:58:55 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/391499805' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 04:58:55 np0005603787 python3[90997]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:58:55 np0005603787 podman[90998]: 2026-01-31 09:58:55.880873145 +0000 UTC m=+0.060204795 container create 9bee7c956dc3b0ef341fb9391dc953640fca42fd24b54614d5444331ca161c97 (image=quay.io/ceph/ceph:v20, name=nice_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 04:58:55 np0005603787 systemd[1]: Started libpod-conmon-9bee7c956dc3b0ef341fb9391dc953640fca42fd24b54614d5444331ca161c97.scope.
Jan 31 04:58:55 np0005603787 podman[90998]: 2026-01-31 09:58:55.843870551 +0000 UTC m=+0.023202231 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:55 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:55 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68167b12d8efa66e885fc700cdfeb213d2b7a30347552970b794c9c59a050bff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:55 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68167b12d8efa66e885fc700cdfeb213d2b7a30347552970b794c9c59a050bff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:55 np0005603787 podman[90998]: 2026-01-31 09:58:55.963320553 +0000 UTC m=+0.142652223 container init 9bee7c956dc3b0ef341fb9391dc953640fca42fd24b54614d5444331ca161c97 (image=quay.io/ceph/ceph:v20, name=nice_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:58:55 np0005603787 podman[90998]: 2026-01-31 09:58:55.969139581 +0000 UTC m=+0.148471231 container start 9bee7c956dc3b0ef341fb9391dc953640fca42fd24b54614d5444331ca161c97 (image=quay.io/ceph/ceph:v20, name=nice_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 04:58:55 np0005603787 podman[90998]: 2026-01-31 09:58:55.980340315 +0000 UTC m=+0.159672045 container attach 9bee7c956dc3b0ef341fb9391dc953640fca42fd24b54614d5444331ca161c97 (image=quay.io/ceph/ceph:v20, name=nice_wilbur, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:58:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 31 04:58:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/58126533' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 31 04:58:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 31 04:58:56 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/58126533' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 31 04:58:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/58126533' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 04:58:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Jan 31 04:58:56 np0005603787 nice_wilbur[91014]: enabled application 'rbd' on pool 'volumes'
Jan 31 04:58:56 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Jan 31 04:58:56 np0005603787 systemd[1]: libpod-9bee7c956dc3b0ef341fb9391dc953640fca42fd24b54614d5444331ca161c97.scope: Deactivated successfully.
Jan 31 04:58:56 np0005603787 podman[90998]: 2026-01-31 09:58:56.686362901 +0000 UTC m=+0.865694551 container died 9bee7c956dc3b0ef341fb9391dc953640fca42fd24b54614d5444331ca161c97 (image=quay.io/ceph/ceph:v20, name=nice_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 04:58:56 np0005603787 systemd[1]: var-lib-containers-storage-overlay-68167b12d8efa66e885fc700cdfeb213d2b7a30347552970b794c9c59a050bff-merged.mount: Deactivated successfully.
Jan 31 04:58:56 np0005603787 podman[90998]: 2026-01-31 09:58:56.742511045 +0000 UTC m=+0.921842705 container remove 9bee7c956dc3b0ef341fb9391dc953640fca42fd24b54614d5444331ca161c97 (image=quay.io/ceph/ceph:v20, name=nice_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:56 np0005603787 systemd[1]: libpod-conmon-9bee7c956dc3b0ef341fb9391dc953640fca42fd24b54614d5444331ca161c97.scope: Deactivated successfully.
Jan 31 04:58:57 np0005603787 python3[91076]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:58:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v53: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:58:57 np0005603787 podman[91077]: 2026-01-31 09:58:57.067940079 +0000 UTC m=+0.047676525 container create 4ce4436a94aac3bcbcd6198cd34141c4f6ddd4510e88e7e9ac7864c46a142ad4 (image=quay.io/ceph/ceph:v20, name=quizzical_fermi, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Jan 31 04:58:57 np0005603787 systemd[1]: Started libpod-conmon-4ce4436a94aac3bcbcd6198cd34141c4f6ddd4510e88e7e9ac7864c46a142ad4.scope.
Jan 31 04:58:57 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e3ebe6aa5969d59299af4304ea1fac1cabe78ad0d8241da35301f51eb16de9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e3ebe6aa5969d59299af4304ea1fac1cabe78ad0d8241da35301f51eb16de9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:57 np0005603787 podman[91077]: 2026-01-31 09:58:57.13464103 +0000 UTC m=+0.114377476 container init 4ce4436a94aac3bcbcd6198cd34141c4f6ddd4510e88e7e9ac7864c46a142ad4 (image=quay.io/ceph/ceph:v20, name=quizzical_fermi, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:57 np0005603787 podman[91077]: 2026-01-31 09:58:57.139668026 +0000 UTC m=+0.119404462 container start 4ce4436a94aac3bcbcd6198cd34141c4f6ddd4510e88e7e9ac7864c46a142ad4 (image=quay.io/ceph/ceph:v20, name=quizzical_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:57 np0005603787 podman[91077]: 2026-01-31 09:58:57.046961489 +0000 UTC m=+0.026697955 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:57 np0005603787 podman[91077]: 2026-01-31 09:58:57.144307972 +0000 UTC m=+0.124044458 container attach 4ce4436a94aac3bcbcd6198cd34141c4f6ddd4510e88e7e9ac7864c46a142ad4 (image=quay.io/ceph/ceph:v20, name=quizzical_fermi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 31 04:58:57 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2637026517' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 31 04:58:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 31 04:58:57 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/58126533' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 04:58:57 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2637026517' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 31 04:58:57 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2637026517' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 04:58:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Jan 31 04:58:57 np0005603787 quizzical_fermi[91090]: enabled application 'rbd' on pool 'backups'
Jan 31 04:58:57 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Jan 31 04:58:57 np0005603787 systemd[1]: libpod-4ce4436a94aac3bcbcd6198cd34141c4f6ddd4510e88e7e9ac7864c46a142ad4.scope: Deactivated successfully.
Jan 31 04:58:57 np0005603787 podman[91077]: 2026-01-31 09:58:57.72114928 +0000 UTC m=+0.700885726 container died 4ce4436a94aac3bcbcd6198cd34141c4f6ddd4510e88e7e9ac7864c46a142ad4 (image=quay.io/ceph/ceph:v20, name=quizzical_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 04:58:57 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f9e3ebe6aa5969d59299af4304ea1fac1cabe78ad0d8241da35301f51eb16de9-merged.mount: Deactivated successfully.
Jan 31 04:58:57 np0005603787 podman[91077]: 2026-01-31 09:58:57.767097648 +0000 UTC m=+0.746834094 container remove 4ce4436a94aac3bcbcd6198cd34141c4f6ddd4510e88e7e9ac7864c46a142ad4 (image=quay.io/ceph/ceph:v20, name=quizzical_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:58:57 np0005603787 systemd[1]: libpod-conmon-4ce4436a94aac3bcbcd6198cd34141c4f6ddd4510e88e7e9ac7864c46a142ad4.scope: Deactivated successfully.
Jan 31 04:58:58 np0005603787 python3[91151]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:58:58 np0005603787 podman[91152]: 2026-01-31 09:58:58.119268487 +0000 UTC m=+0.082622374 container create 91f83df667430fd304a3d02923201a26a635417ee7f4444b6a2138e78200760d (image=quay.io/ceph/ceph:v20, name=condescending_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:58:58 np0005603787 podman[91152]: 2026-01-31 09:58:58.060708727 +0000 UTC m=+0.024062644 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:58 np0005603787 systemd[1]: Started libpod-conmon-91f83df667430fd304a3d02923201a26a635417ee7f4444b6a2138e78200760d.scope.
Jan 31 04:58:58 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:58 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef6bb837a35bfd25b347dbd36b45e0acb6341dd190729e6b6773cfe6ce52eb5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:58 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef6bb837a35bfd25b347dbd36b45e0acb6341dd190729e6b6773cfe6ce52eb5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:58 np0005603787 podman[91152]: 2026-01-31 09:58:58.327953932 +0000 UTC m=+0.291307849 container init 91f83df667430fd304a3d02923201a26a635417ee7f4444b6a2138e78200760d (image=quay.io/ceph/ceph:v20, name=condescending_satoshi, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:58:58 np0005603787 podman[91152]: 2026-01-31 09:58:58.334646554 +0000 UTC m=+0.298000441 container start 91f83df667430fd304a3d02923201a26a635417ee7f4444b6a2138e78200760d (image=quay.io/ceph/ceph:v20, name=condescending_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 04:58:58 np0005603787 podman[91152]: 2026-01-31 09:58:58.357113454 +0000 UTC m=+0.320467361 container attach 91f83df667430fd304a3d02923201a26a635417ee7f4444b6a2138e78200760d (image=quay.io/ceph/ceph:v20, name=condescending_satoshi, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 31 04:58:58 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/909072576' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 31 04:58:58 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2637026517' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 04:58:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 31 04:58:58 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/909072576' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 31 04:58:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Jan 31 04:58:58 np0005603787 condescending_satoshi[91167]: enabled application 'rbd' on pool 'images'
Jan 31 04:58:58 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Jan 31 04:58:58 np0005603787 systemd[1]: libpod-91f83df667430fd304a3d02923201a26a635417ee7f4444b6a2138e78200760d.scope: Deactivated successfully.
Jan 31 04:58:58 np0005603787 podman[91152]: 2026-01-31 09:58:58.847951718 +0000 UTC m=+0.811305625 container died 91f83df667430fd304a3d02923201a26a635417ee7f4444b6a2138e78200760d (image=quay.io/ceph/ceph:v20, name=condescending_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:58 np0005603787 systemd[1]: var-lib-containers-storage-overlay-7ef6bb837a35bfd25b347dbd36b45e0acb6341dd190729e6b6773cfe6ce52eb5-merged.mount: Deactivated successfully.
Jan 31 04:58:59 np0005603787 podman[91152]: 2026-01-31 09:58:58.999647186 +0000 UTC m=+0.963001073 container remove 91f83df667430fd304a3d02923201a26a635417ee7f4444b6a2138e78200760d (image=quay.io/ceph/ceph:v20, name=condescending_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:58:59 np0005603787 systemd[1]: libpod-conmon-91f83df667430fd304a3d02923201a26a635417ee7f4444b6a2138e78200760d.scope: Deactivated successfully.
Jan 31 04:58:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v56: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:58:59 np0005603787 python3[91231]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:58:59 np0005603787 podman[91232]: 2026-01-31 09:58:59.360479891 +0000 UTC m=+0.056659790 container create e7440a56a7cc3b37466a208c117ca5f685c265b569b06e117fc0b798cba1c936 (image=quay.io/ceph/ceph:v20, name=hopeful_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 04:58:59 np0005603787 systemd[1]: Started libpod-conmon-e7440a56a7cc3b37466a208c117ca5f685c265b569b06e117fc0b798cba1c936.scope.
Jan 31 04:58:59 np0005603787 podman[91232]: 2026-01-31 09:58:59.323502256 +0000 UTC m=+0.019682175 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:58:59 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:58:59 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11e54b053b6676039022bbfc084b532a41c51e086750997214635b63b3fa9a7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:59 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11e54b053b6676039022bbfc084b532a41c51e086750997214635b63b3fa9a7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:58:59 np0005603787 podman[91232]: 2026-01-31 09:58:59.462750167 +0000 UTC m=+0.158930096 container init e7440a56a7cc3b37466a208c117ca5f685c265b569b06e117fc0b798cba1c936 (image=quay.io/ceph/ceph:v20, name=hopeful_goodall, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 04:58:59 np0005603787 podman[91232]: 2026-01-31 09:58:59.466633681 +0000 UTC m=+0.162813580 container start e7440a56a7cc3b37466a208c117ca5f685c265b569b06e117fc0b798cba1c936 (image=quay.io/ceph/ceph:v20, name=hopeful_goodall, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:58:59 np0005603787 podman[91232]: 2026-01-31 09:58:59.482281437 +0000 UTC m=+0.178461336 container attach e7440a56a7cc3b37466a208c117ca5f685c265b569b06e117fc0b798cba1c936 (image=quay.io/ceph/ceph:v20, name=hopeful_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:58:59 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/909072576' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 31 04:58:59 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/909072576' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 31 04:58:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 31 04:58:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2456375565' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 31 04:59:00 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2456375565' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 31 04:59:00 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 31 04:59:00 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2456375565' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 04:59:00 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Jan 31 04:59:00 np0005603787 hopeful_goodall[91248]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 31 04:59:00 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Jan 31 04:59:00 np0005603787 systemd[1]: libpod-e7440a56a7cc3b37466a208c117ca5f685c265b569b06e117fc0b798cba1c936.scope: Deactivated successfully.
Jan 31 04:59:00 np0005603787 podman[91232]: 2026-01-31 09:59:00.897227585 +0000 UTC m=+1.593407484 container died e7440a56a7cc3b37466a208c117ca5f685c265b569b06e117fc0b798cba1c936 (image=quay.io/ceph/ceph:v20, name=hopeful_goodall, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 04:59:01 np0005603787 systemd[1]: var-lib-containers-storage-overlay-b11e54b053b6676039022bbfc084b532a41c51e086750997214635b63b3fa9a7-merged.mount: Deactivated successfully.
Jan 31 04:59:01 np0005603787 podman[91232]: 2026-01-31 09:59:01.038589073 +0000 UTC m=+1.734768982 container remove e7440a56a7cc3b37466a208c117ca5f685c265b569b06e117fc0b798cba1c936 (image=quay.io/ceph/ceph:v20, name=hopeful_goodall, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:59:01 np0005603787 systemd[1]: libpod-conmon-e7440a56a7cc3b37466a208c117ca5f685c265b569b06e117fc0b798cba1c936.scope: Deactivated successfully.
Jan 31 04:59:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:59:01 np0005603787 python3[91308]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:59:01 np0005603787 podman[91309]: 2026-01-31 09:59:01.335333898 +0000 UTC m=+0.052973169 container create c7c472110bab3b36e2b96061061850723426207eb8cff8ddb95e5156fe21f8f5 (image=quay.io/ceph/ceph:v20, name=reverent_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 04:59:01 np0005603787 systemd[1]: Started libpod-conmon-c7c472110bab3b36e2b96061061850723426207eb8cff8ddb95e5156fe21f8f5.scope.
Jan 31 04:59:01 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:01 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74bf7ac53411f80eacf961ac8a3137755e0319cc0463997b92703f3e16edaad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:01 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74bf7ac53411f80eacf961ac8a3137755e0319cc0463997b92703f3e16edaad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:01 np0005603787 podman[91309]: 2026-01-31 09:59:01.306766883 +0000 UTC m=+0.024406204 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:59:01 np0005603787 podman[91309]: 2026-01-31 09:59:01.413021037 +0000 UTC m=+0.130660328 container init c7c472110bab3b36e2b96061061850723426207eb8cff8ddb95e5156fe21f8f5 (image=quay.io/ceph/ceph:v20, name=reverent_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 04:59:01 np0005603787 podman[91309]: 2026-01-31 09:59:01.418213508 +0000 UTC m=+0.135852769 container start c7c472110bab3b36e2b96061061850723426207eb8cff8ddb95e5156fe21f8f5 (image=quay.io/ceph/ceph:v20, name=reverent_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 04:59:01 np0005603787 podman[91309]: 2026-01-31 09:59:01.422460803 +0000 UTC m=+0.140100104 container attach c7c472110bab3b36e2b96061061850723426207eb8cff8ddb95e5156fe21f8f5 (image=quay.io/ceph/ceph:v20, name=reverent_einstein, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:59:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 31 04:59:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/155971132' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 31 04:59:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 31 04:59:01 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2456375565' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 04:59:01 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/155971132' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 31 04:59:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/155971132' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 04:59:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Jan 31 04:59:01 np0005603787 reverent_einstein[91324]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 31 04:59:01 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Jan 31 04:59:01 np0005603787 systemd[1]: libpod-c7c472110bab3b36e2b96061061850723426207eb8cff8ddb95e5156fe21f8f5.scope: Deactivated successfully.
Jan 31 04:59:01 np0005603787 podman[91309]: 2026-01-31 09:59:01.959724577 +0000 UTC m=+0.677363858 container died c7c472110bab3b36e2b96061061850723426207eb8cff8ddb95e5156fe21f8f5 (image=quay.io/ceph/ceph:v20, name=reverent_einstein, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 04:59:01 np0005603787 systemd[1]: var-lib-containers-storage-overlay-b74bf7ac53411f80eacf961ac8a3137755e0319cc0463997b92703f3e16edaad-merged.mount: Deactivated successfully.
Jan 31 04:59:02 np0005603787 podman[91309]: 2026-01-31 09:59:02.014828173 +0000 UTC m=+0.732467444 container remove c7c472110bab3b36e2b96061061850723426207eb8cff8ddb95e5156fe21f8f5 (image=quay.io/ceph/ceph:v20, name=reverent_einstein, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:02 np0005603787 systemd[1]: libpod-conmon-c7c472110bab3b36e2b96061061850723426207eb8cff8ddb95e5156fe21f8f5.scope: Deactivated successfully.
Jan 31 04:59:02 np0005603787 python3[91437]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:59:02 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/155971132' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 04:59:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:59:03 np0005603787 python3[91508]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769853542.565681-36650-68019882407893/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:59:03 np0005603787 python3[91610]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:59:03 np0005603787 python3[91685]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769853543.3624055-36664-71958305025649/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=c2a6f4818b2d3e9aefb86b9dfa65e084fb108f7f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:59:04 np0005603787 python3[91735]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:59:04 np0005603787 podman[91736]: 2026-01-31 09:59:04.259543147 +0000 UTC m=+0.039699104 container create 57c6d57e10b72a01a65d82062a13c289b2aec79f750f3f0e00ef2d39105cccaa (image=quay.io/ceph/ceph:v20, name=amazing_robinson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 04:59:04 np0005603787 systemd[1]: Started libpod-conmon-57c6d57e10b72a01a65d82062a13c289b2aec79f750f3f0e00ef2d39105cccaa.scope.
Jan 31 04:59:04 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6f388b4c7fde71bc0a894795f8ee604182e3111bc630c42b5230c09ff35c6d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6f388b4c7fde71bc0a894795f8ee604182e3111bc630c42b5230c09ff35c6d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6f388b4c7fde71bc0a894795f8ee604182e3111bc630c42b5230c09ff35c6d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:04 np0005603787 podman[91736]: 2026-01-31 09:59:04.242476699 +0000 UTC m=+0.022632636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:59:04 np0005603787 podman[91736]: 2026-01-31 09:59:04.360531659 +0000 UTC m=+0.140687606 container init 57c6d57e10b72a01a65d82062a13c289b2aec79f750f3f0e00ef2d39105cccaa (image=quay.io/ceph/ceph:v20, name=amazing_robinson, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:04 np0005603787 podman[91736]: 2026-01-31 09:59:04.368747329 +0000 UTC m=+0.148903246 container start 57c6d57e10b72a01a65d82062a13c289b2aec79f750f3f0e00ef2d39105cccaa (image=quay.io/ceph/ceph:v20, name=amazing_robinson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 04:59:04 np0005603787 podman[91736]: 2026-01-31 09:59:04.371816931 +0000 UTC m=+0.151972848 container attach 57c6d57e10b72a01a65d82062a13c289b2aec79f750f3f0e00ef2d39105cccaa (image=quay.io/ceph/ceph:v20, name=amazing_robinson, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:04 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 31 04:59:04 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1556644593' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 04:59:04 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1556644593' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 04:59:04 np0005603787 amazing_robinson[91752]: 
Jan 31 04:59:04 np0005603787 amazing_robinson[91752]: [global]
Jan 31 04:59:04 np0005603787 amazing_robinson[91752]:     fsid = 962d77ae-dc67-5de8-89d8-3d1670c67b61
Jan 31 04:59:04 np0005603787 amazing_robinson[91752]:     mon_host = 192.168.122.100
Jan 31 04:59:04 np0005603787 amazing_robinson[91752]:     rgw_keystone_api_version = 3
Jan 31 04:59:04 np0005603787 systemd[1]: libpod-57c6d57e10b72a01a65d82062a13c289b2aec79f750f3f0e00ef2d39105cccaa.scope: Deactivated successfully.
Jan 31 04:59:04 np0005603787 podman[91736]: 2026-01-31 09:59:04.808397393 +0000 UTC m=+0.588553330 container died 57c6d57e10b72a01a65d82062a13c289b2aec79f750f3f0e00ef2d39105cccaa (image=quay.io/ceph/ceph:v20, name=amazing_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:59:04 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ed6f388b4c7fde71bc0a894795f8ee604182e3111bc630c42b5230c09ff35c6d-merged.mount: Deactivated successfully.
Jan 31 04:59:04 np0005603787 podman[91736]: 2026-01-31 09:59:04.849492582 +0000 UTC m=+0.629648509 container remove 57c6d57e10b72a01a65d82062a13c289b2aec79f750f3f0e00ef2d39105cccaa (image=quay.io/ceph/ceph:v20, name=amazing_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 04:59:04 np0005603787 systemd[1]: libpod-conmon-57c6d57e10b72a01a65d82062a13c289b2aec79f750f3f0e00ef2d39105cccaa.scope: Deactivated successfully.
Jan 31 04:59:04 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/1556644593' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 04:59:04 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/1556644593' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 04:59:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:59:05 np0005603787 python3[91863]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:59:05 np0005603787 podman[91883]: 2026-01-31 09:59:05.184714222 +0000 UTC m=+0.040268728 container create f0a814f301551dd2869b14792f33adce3b1862e8c7b23e8a5d3918348e343709 (image=quay.io/ceph/ceph:v20, name=musing_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 04:59:05 np0005603787 systemd[1]: Started libpod-conmon-f0a814f301551dd2869b14792f33adce3b1862e8c7b23e8a5d3918348e343709.scope.
Jan 31 04:59:05 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:05 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14385968a59468d01282fe0ff4aea500f76dfee6f21c2b6c63e457bde525fe08/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:05 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14385968a59468d01282fe0ff4aea500f76dfee6f21c2b6c63e457bde525fe08/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:05 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14385968a59468d01282fe0ff4aea500f76dfee6f21c2b6c63e457bde525fe08/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:05 np0005603787 podman[91883]: 2026-01-31 09:59:05.167567663 +0000 UTC m=+0.023122189 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:59:05 np0005603787 podman[91883]: 2026-01-31 09:59:05.265654178 +0000 UTC m=+0.121208694 container init f0a814f301551dd2869b14792f33adce3b1862e8c7b23e8a5d3918348e343709 (image=quay.io/ceph/ceph:v20, name=musing_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:05 np0005603787 podman[91883]: 2026-01-31 09:59:05.272267425 +0000 UTC m=+0.127821931 container start f0a814f301551dd2869b14792f33adce3b1862e8c7b23e8a5d3918348e343709 (image=quay.io/ceph/ceph:v20, name=musing_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:05 np0005603787 podman[91883]: 2026-01-31 09:59:05.309934273 +0000 UTC m=+0.165488779 container attach f0a814f301551dd2869b14792f33adce3b1862e8c7b23e8a5d3918348e343709 (image=quay.io/ceph/ceph:v20, name=musing_chandrasekhar, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 04:59:05 np0005603787 podman[91926]: 2026-01-31 09:59:05.314870454 +0000 UTC m=+0.080136775 container exec 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:05 np0005603787 podman[91926]: 2026-01-31 09:59:05.411332946 +0000 UTC m=+0.176599237 container exec_died 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 04:59:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Jan 31 04:59:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1881041813' entity='client.admin' 
Jan 31 04:59:05 np0005603787 musing_chandrasekhar[91924]: set ssl_option
Jan 31 04:59:05 np0005603787 systemd[1]: libpod-f0a814f301551dd2869b14792f33adce3b1862e8c7b23e8a5d3918348e343709.scope: Deactivated successfully.
Jan 31 04:59:05 np0005603787 podman[91883]: 2026-01-31 09:59:05.81584827 +0000 UTC m=+0.671402776 container died f0a814f301551dd2869b14792f33adce3b1862e8c7b23e8a5d3918348e343709 (image=quay.io/ceph/ceph:v20, name=musing_chandrasekhar, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 31 04:59:05 np0005603787 systemd[1]: var-lib-containers-storage-overlay-14385968a59468d01282fe0ff4aea500f76dfee6f21c2b6c63e457bde525fe08-merged.mount: Deactivated successfully.
Jan 31 04:59:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:59:05 np0005603787 podman[91883]: 2026-01-31 09:59:05.867142623 +0000 UTC m=+0.722697129 container remove f0a814f301551dd2869b14792f33adce3b1862e8c7b23e8a5d3918348e343709 (image=quay.io/ceph/ceph:v20, name=musing_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 04:59:05 np0005603787 systemd[1]: libpod-conmon-f0a814f301551dd2869b14792f33adce3b1862e8c7b23e8a5d3918348e343709.scope: Deactivated successfully.
Jan 31 04:59:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:59:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:06 np0005603787 python3[92183]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:59:06 np0005603787 podman[92184]: 2026-01-31 09:59:06.184002751 +0000 UTC m=+0.047491092 container create 624f0ad6f439eb819e8b2fa2a621bf8be70f79f4d4590e46cbb6ac27458cdaa3 (image=quay.io/ceph/ceph:v20, name=inspiring_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 04:59:06 np0005603787 systemd[1]: Started libpod-conmon-624f0ad6f439eb819e8b2fa2a621bf8be70f79f4d4590e46cbb6ac27458cdaa3.scope.
Jan 31 04:59:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:59:06 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:06 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59bce29a73604c103d237f7588560880fce9e2217a071bf7f497bf791decbb32/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:06 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59bce29a73604c103d237f7588560880fce9e2217a071bf7f497bf791decbb32/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:06 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59bce29a73604c103d237f7588560880fce9e2217a071bf7f497bf791decbb32/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:06 np0005603787 podman[92184]: 2026-01-31 09:59:06.157408709 +0000 UTC m=+0.020897070 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:59:06 np0005603787 podman[92184]: 2026-01-31 09:59:06.287226373 +0000 UTC m=+0.150714734 container init 624f0ad6f439eb819e8b2fa2a621bf8be70f79f4d4590e46cbb6ac27458cdaa3 (image=quay.io/ceph/ceph:v20, name=inspiring_albattani, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:06 np0005603787 podman[92184]: 2026-01-31 09:59:06.293439699 +0000 UTC m=+0.156928040 container start 624f0ad6f439eb819e8b2fa2a621bf8be70f79f4d4590e46cbb6ac27458cdaa3 (image=quay.io/ceph/ceph:v20, name=inspiring_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:06 np0005603787 podman[92184]: 2026-01-31 09:59:06.323984967 +0000 UTC m=+0.187473338 container attach 624f0ad6f439eb819e8b2fa2a621bf8be70f79f4d4590e46cbb6ac27458cdaa3 (image=quay.io/ceph/ceph:v20, name=inspiring_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 04:59:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:59:06 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:59:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 04:59:06 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:59:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 04:59:06 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 04:59:06 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 04:59:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 04:59:06 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 04:59:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:59:06 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:59:06 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:59:06 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Jan 31 04:59:06 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 31 04:59:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 04:59:06 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:06 np0005603787 inspiring_albattani[92213]: Scheduled rgw.rgw update...
Jan 31 04:59:06 np0005603787 systemd[1]: libpod-624f0ad6f439eb819e8b2fa2a621bf8be70f79f4d4590e46cbb6ac27458cdaa3.scope: Deactivated successfully.
Jan 31 04:59:06 np0005603787 podman[92184]: 2026-01-31 09:59:06.751453855 +0000 UTC m=+0.614942196 container died 624f0ad6f439eb819e8b2fa2a621bf8be70f79f4d4590e46cbb6ac27458cdaa3 (image=quay.io/ceph/ceph:v20, name=inspiring_albattani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 04:59:06 np0005603787 podman[92315]: 2026-01-31 09:59:06.745586708 +0000 UTC m=+0.019651217 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:59:07 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/1881041813' entity='client.admin' 
Jan 31 04:59:07 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:07 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:07 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:59:07 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:07 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 04:59:07 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:07 np0005603787 podman[92315]: 2026-01-31 09:59:07.327192331 +0000 UTC m=+0.601256820 container create 6db43c4a4316e675170969ca33c4e1b112d32a1d34dfe96531656306c819e785 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:07 np0005603787 systemd[1]: Started libpod-conmon-6db43c4a4316e675170969ca33c4e1b112d32a1d34dfe96531656306c819e785.scope.
Jan 31 04:59:07 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:07 np0005603787 podman[92315]: 2026-01-31 09:59:07.427237978 +0000 UTC m=+0.701302487 container init 6db43c4a4316e675170969ca33c4e1b112d32a1d34dfe96531656306c819e785 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_taussig, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 04:59:07 np0005603787 podman[92315]: 2026-01-31 09:59:07.432146079 +0000 UTC m=+0.706210568 container start 6db43c4a4316e675170969ca33c4e1b112d32a1d34dfe96531656306c819e785 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_taussig, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:07 np0005603787 wizardly_taussig[92346]: 167 167
Jan 31 04:59:07 np0005603787 systemd[1]: libpod-6db43c4a4316e675170969ca33c4e1b112d32a1d34dfe96531656306c819e785.scope: Deactivated successfully.
Jan 31 04:59:07 np0005603787 podman[92315]: 2026-01-31 09:59:07.441274433 +0000 UTC m=+0.715338922 container attach 6db43c4a4316e675170969ca33c4e1b112d32a1d34dfe96531656306c819e785 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 04:59:07 np0005603787 podman[92315]: 2026-01-31 09:59:07.441684194 +0000 UTC m=+0.715748683 container died 6db43c4a4316e675170969ca33c4e1b112d32a1d34dfe96531656306c819e785 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:07 np0005603787 systemd[1]: var-lib-containers-storage-overlay-88dc9fb9871da52aa36cfdc0594c5bb57317525da299dc2c3c5448dece5d5dcc-merged.mount: Deactivated successfully.
Jan 31 04:59:07 np0005603787 podman[92315]: 2026-01-31 09:59:07.543456078 +0000 UTC m=+0.817520567 container remove 6db43c4a4316e675170969ca33c4e1b112d32a1d34dfe96531656306c819e785 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 04:59:07 np0005603787 systemd[1]: libpod-conmon-6db43c4a4316e675170969ca33c4e1b112d32a1d34dfe96531656306c819e785.scope: Deactivated successfully.
Jan 31 04:59:07 np0005603787 systemd[1]: var-lib-containers-storage-overlay-59bce29a73604c103d237f7588560880fce9e2217a071bf7f497bf791decbb32-merged.mount: Deactivated successfully.
Jan 31 04:59:07 np0005603787 podman[92184]: 2026-01-31 09:59:07.672994844 +0000 UTC m=+1.536483185 container remove 624f0ad6f439eb819e8b2fa2a621bf8be70f79f4d4590e46cbb6ac27458cdaa3 (image=quay.io/ceph/ceph:v20, name=inspiring_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 04:59:07 np0005603787 systemd[1]: libpod-conmon-624f0ad6f439eb819e8b2fa2a621bf8be70f79f4d4590e46cbb6ac27458cdaa3.scope: Deactivated successfully.
Jan 31 04:59:07 np0005603787 podman[92371]: 2026-01-31 09:59:07.708541885 +0000 UTC m=+0.075507041 container create 895a04ff18f1d5f213ee560731fa42b976954d340fb974826cde3af3d45c2c8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 04:59:07 np0005603787 systemd[1]: Started libpod-conmon-895a04ff18f1d5f213ee560731fa42b976954d340fb974826cde3af3d45c2c8e.scope.
Jan 31 04:59:07 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:07 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f7ebfc87ffe971099fe40f4dc19e085ad33fa16ef0383557bed36c65e48fe6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:07 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f7ebfc87ffe971099fe40f4dc19e085ad33fa16ef0383557bed36c65e48fe6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:07 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f7ebfc87ffe971099fe40f4dc19e085ad33fa16ef0383557bed36c65e48fe6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:07 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f7ebfc87ffe971099fe40f4dc19e085ad33fa16ef0383557bed36c65e48fe6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:07 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f7ebfc87ffe971099fe40f4dc19e085ad33fa16ef0383557bed36c65e48fe6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:07 np0005603787 podman[92371]: 2026-01-31 09:59:07.690364329 +0000 UTC m=+0.057329505 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:07 np0005603787 podman[92371]: 2026-01-31 09:59:07.812698022 +0000 UTC m=+0.179663178 container init 895a04ff18f1d5f213ee560731fa42b976954d340fb974826cde3af3d45c2c8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:59:07 np0005603787 podman[92371]: 2026-01-31 09:59:07.818514957 +0000 UTC m=+0.185480113 container start 895a04ff18f1d5f213ee560731fa42b976954d340fb974826cde3af3d45c2c8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 04:59:07 np0005603787 podman[92371]: 2026-01-31 09:59:07.823409279 +0000 UTC m=+0.190374475 container attach 895a04ff18f1d5f213ee560731fa42b976954d340fb974826cde3af3d45c2c8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 04:59:08 np0005603787 laughing_bell[92388]: --> passed data devices: 0 physical, 3 LVM
Jan 31 04:59:08 np0005603787 laughing_bell[92388]: --> All data devices are unavailable
Jan 31 04:59:08 np0005603787 podman[92371]: 2026-01-31 09:59:08.239458731 +0000 UTC m=+0.606423897 container died 895a04ff18f1d5f213ee560731fa42b976954d340fb974826cde3af3d45c2c8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bell, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 04:59:08 np0005603787 systemd[1]: libpod-895a04ff18f1d5f213ee560731fa42b976954d340fb974826cde3af3d45c2c8e.scope: Deactivated successfully.
Jan 31 04:59:08 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f4f7ebfc87ffe971099fe40f4dc19e085ad33fa16ef0383557bed36c65e48fe6-merged.mount: Deactivated successfully.
Jan 31 04:59:08 np0005603787 ceph-mon[75160]: Saving service rgw.rgw spec with placement compute-0
Jan 31 04:59:08 np0005603787 podman[92371]: 2026-01-31 09:59:08.328198675 +0000 UTC m=+0.695163831 container remove 895a04ff18f1d5f213ee560731fa42b976954d340fb974826cde3af3d45c2c8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 04:59:08 np0005603787 systemd[1]: libpod-conmon-895a04ff18f1d5f213ee560731fa42b976954d340fb974826cde3af3d45c2c8e.scope: Deactivated successfully.
Jan 31 04:59:08 np0005603787 python3[92514]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:59:08 np0005603787 podman[92626]: 2026-01-31 09:59:08.733062649 +0000 UTC m=+0.037963516 container create dbf58d76840d35d19b0318c2333a940553df98cc6e39e43a3f402aa6cad741c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:59:08 np0005603787 systemd[1]: Started libpod-conmon-dbf58d76840d35d19b0318c2333a940553df98cc6e39e43a3f402aa6cad741c8.scope.
Jan 31 04:59:08 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:08 np0005603787 podman[92626]: 2026-01-31 09:59:08.71515829 +0000 UTC m=+0.020059187 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:08 np0005603787 podman[92626]: 2026-01-31 09:59:08.8186734 +0000 UTC m=+0.123574287 container init dbf58d76840d35d19b0318c2333a940553df98cc6e39e43a3f402aa6cad741c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_varahamihira, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 04:59:08 np0005603787 podman[92626]: 2026-01-31 09:59:08.822850512 +0000 UTC m=+0.127751379 container start dbf58d76840d35d19b0318c2333a940553df98cc6e39e43a3f402aa6cad741c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:59:08 np0005603787 elegant_varahamihira[92647]: 167 167
Jan 31 04:59:08 np0005603787 systemd[1]: libpod-dbf58d76840d35d19b0318c2333a940553df98cc6e39e43a3f402aa6cad741c8.scope: Deactivated successfully.
Jan 31 04:59:08 np0005603787 podman[92626]: 2026-01-31 09:59:08.829780857 +0000 UTC m=+0.134681784 container attach dbf58d76840d35d19b0318c2333a940553df98cc6e39e43a3f402aa6cad741c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_varahamihira, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 04:59:08 np0005603787 podman[92626]: 2026-01-31 09:59:08.830540327 +0000 UTC m=+0.135441214 container died dbf58d76840d35d19b0318c2333a940553df98cc6e39e43a3f402aa6cad741c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 04:59:08 np0005603787 python3[92637]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769853548.2922523-36705-163675788362846/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:59:08 np0005603787 systemd[1]: var-lib-containers-storage-overlay-5931daebb24cf6b18a5d250c848459b6562f127899e31f60444ea723147792b5-merged.mount: Deactivated successfully.
Jan 31 04:59:08 np0005603787 podman[92626]: 2026-01-31 09:59:08.880779572 +0000 UTC m=+0.185680439 container remove dbf58d76840d35d19b0318c2333a940553df98cc6e39e43a3f402aa6cad741c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:08 np0005603787 systemd[1]: libpod-conmon-dbf58d76840d35d19b0318c2333a940553df98cc6e39e43a3f402aa6cad741c8.scope: Deactivated successfully.
Jan 31 04:59:08 np0005603787 podman[92695]: 2026-01-31 09:59:08.998559453 +0000 UTC m=+0.037025001 container create 00da23f17f77e7dc8f1a3dd75e9fcdd8cb1e77b207f55b645014361488ee1417 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 31 04:59:09 np0005603787 systemd[1]: Started libpod-conmon-00da23f17f77e7dc8f1a3dd75e9fcdd8cb1e77b207f55b645014361488ee1417.scope.
Jan 31 04:59:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:59:09 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4afef5a7c0e5a92f9825e7b645974db690b5dc01555dd5fd753849f3c6d69dd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4afef5a7c0e5a92f9825e7b645974db690b5dc01555dd5fd753849f3c6d69dd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4afef5a7c0e5a92f9825e7b645974db690b5dc01555dd5fd753849f3c6d69dd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4afef5a7c0e5a92f9825e7b645974db690b5dc01555dd5fd753849f3c6d69dd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:09 np0005603787 podman[92695]: 2026-01-31 09:59:08.98049062 +0000 UTC m=+0.018956188 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:09 np0005603787 podman[92695]: 2026-01-31 09:59:09.087103972 +0000 UTC m=+0.125569540 container init 00da23f17f77e7dc8f1a3dd75e9fcdd8cb1e77b207f55b645014361488ee1417 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_einstein, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:09 np0005603787 podman[92695]: 2026-01-31 09:59:09.093216886 +0000 UTC m=+0.131682434 container start 00da23f17f77e7dc8f1a3dd75e9fcdd8cb1e77b207f55b645014361488ee1417 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_einstein, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:09 np0005603787 podman[92695]: 2026-01-31 09:59:09.098310772 +0000 UTC m=+0.136776320 container attach 00da23f17f77e7dc8f1a3dd75e9fcdd8cb1e77b207f55b645014361488ee1417 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:09 np0005603787 python3[92741]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:59:09 np0005603787 podman[92744]: 2026-01-31 09:59:09.342875366 +0000 UTC m=+0.067459126 container create 40ee93356eb7b5941fa4d381d44f4547f106b84e932c4cfec828d1bd28ee3460 (image=quay.io/ceph/ceph:v20, name=reverent_roentgen, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]: {
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:    "0": [
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:        {
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "devices": [
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "/dev/loop3"
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            ],
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "lv_name": "ceph_lv0",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "lv_size": "21470642176",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "name": "ceph_lv0",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "tags": {
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.cluster_name": "ceph",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.crush_device_class": "",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.encrypted": "0",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.objectstore": "bluestore",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.osd_id": "0",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.type": "block",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.vdo": "0",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.with_tpm": "0"
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            },
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "type": "block",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "vg_name": "ceph_vg0"
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:        }
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:    ],
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:    "1": [
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:        {
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "devices": [
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "/dev/loop4"
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            ],
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "lv_name": "ceph_lv1",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "lv_size": "21470642176",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "name": "ceph_lv1",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "tags": {
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.cluster_name": "ceph",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.crush_device_class": "",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.encrypted": "0",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.objectstore": "bluestore",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.osd_id": "1",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.type": "block",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.vdo": "0",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.with_tpm": "0"
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            },
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "type": "block",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "vg_name": "ceph_vg1"
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:        }
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:    ],
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:    "2": [
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:        {
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "devices": [
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "/dev/loop5"
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            ],
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "lv_name": "ceph_lv2",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "lv_size": "21470642176",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "name": "ceph_lv2",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "tags": {
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.cluster_name": "ceph",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.crush_device_class": "",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.encrypted": "0",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.objectstore": "bluestore",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.osd_id": "2",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.type": "block",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.vdo": "0",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:                "ceph.with_tpm": "0"
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            },
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "type": "block",
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:            "vg_name": "ceph_vg2"
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:        }
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]:    ]
Jan 31 04:59:09 np0005603787 goofy_einstein[92711]: }
Jan 31 04:59:09 np0005603787 systemd[1]: libpod-00da23f17f77e7dc8f1a3dd75e9fcdd8cb1e77b207f55b645014361488ee1417.scope: Deactivated successfully.
Jan 31 04:59:09 np0005603787 podman[92695]: 2026-01-31 09:59:09.37663898 +0000 UTC m=+0.415104528 container died 00da23f17f77e7dc8f1a3dd75e9fcdd8cb1e77b207f55b645014361488ee1417 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 04:59:09 np0005603787 podman[92744]: 2026-01-31 09:59:09.298391295 +0000 UTC m=+0.022975085 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:59:09 np0005603787 systemd[1]: Started libpod-conmon-40ee93356eb7b5941fa4d381d44f4547f106b84e932c4cfec828d1bd28ee3460.scope.
Jan 31 04:59:09 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:09 np0005603787 systemd[1]: var-lib-containers-storage-overlay-4afef5a7c0e5a92f9825e7b645974db690b5dc01555dd5fd753849f3c6d69dd6-merged.mount: Deactivated successfully.
Jan 31 04:59:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/945841ef615fd4841251861b6020f525fad993999b86e98e76b6ca907925c956/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/945841ef615fd4841251861b6020f525fad993999b86e98e76b6ca907925c956/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/945841ef615fd4841251861b6020f525fad993999b86e98e76b6ca907925c956/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:09 np0005603787 podman[92695]: 2026-01-31 09:59:09.436581204 +0000 UTC m=+0.475046752 container remove 00da23f17f77e7dc8f1a3dd75e9fcdd8cb1e77b207f55b645014361488ee1417 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 04:59:09 np0005603787 podman[92744]: 2026-01-31 09:59:09.443182831 +0000 UTC m=+0.167766611 container init 40ee93356eb7b5941fa4d381d44f4547f106b84e932c4cfec828d1bd28ee3460 (image=quay.io/ceph/ceph:v20, name=reverent_roentgen, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:09 np0005603787 podman[92744]: 2026-01-31 09:59:09.448932435 +0000 UTC m=+0.173516195 container start 40ee93356eb7b5941fa4d381d44f4547f106b84e932c4cfec828d1bd28ee3460 (image=quay.io/ceph/ceph:v20, name=reverent_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 04:59:09 np0005603787 podman[92744]: 2026-01-31 09:59:09.452367866 +0000 UTC m=+0.176951636 container attach 40ee93356eb7b5941fa4d381d44f4547f106b84e932c4cfec828d1bd28ee3460 (image=quay.io/ceph/ceph:v20, name=reverent_roentgen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 04:59:09 np0005603787 systemd[1]: libpod-conmon-00da23f17f77e7dc8f1a3dd75e9fcdd8cb1e77b207f55b645014361488ee1417.scope: Deactivated successfully.
Jan 31 04:59:09 np0005603787 podman[92859]: 2026-01-31 09:59:09.8321819 +0000 UTC m=+0.035491991 container create 35f593bafbd6bc8126e9c03ed2a69b0ffc41e8f14f0d4e908157bf882f72cd92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_leavitt, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 04:59:09 np0005603787 systemd[1]: Started libpod-conmon-35f593bafbd6bc8126e9c03ed2a69b0ffc41e8f14f0d4e908157bf882f72cd92.scope.
Jan 31 04:59:09 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:09 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:59:09 np0005603787 ceph-mgr[75453]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 31 04:59:09 np0005603787 podman[92859]: 2026-01-31 09:59:09.904819634 +0000 UTC m=+0.108129755 container init 35f593bafbd6bc8126e9c03ed2a69b0ffc41e8f14f0d4e908157bf882f72cd92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 31 04:59:09 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 31 04:59:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 31 04:59:09 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 31 04:59:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 31 04:59:09 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 31 04:59:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 31 04:59:09 np0005603787 podman[92859]: 2026-01-31 09:59:09.909776446 +0000 UTC m=+0.113086547 container start 35f593bafbd6bc8126e9c03ed2a69b0ffc41e8f14f0d4e908157bf882f72cd92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_leavitt, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 04:59:09 np0005603787 ceph-mon[75160]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 04:59:09 np0005603787 ceph-mon[75160]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 04:59:09 np0005603787 podman[92859]: 2026-01-31 09:59:09.815333039 +0000 UTC m=+0.018643160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:09 np0005603787 jolly_leavitt[92875]: 167 167
Jan 31 04:59:09 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0[75156]: 2026-01-31T09:59:09.908+0000 7f2019441640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 04:59:09 np0005603787 systemd[1]: libpod-35f593bafbd6bc8126e9c03ed2a69b0ffc41e8f14f0d4e908157bf882f72cd92.scope: Deactivated successfully.
Jan 31 04:59:09 np0005603787 podman[92859]: 2026-01-31 09:59:09.917216105 +0000 UTC m=+0.120526216 container attach 35f593bafbd6bc8126e9c03ed2a69b0ffc41e8f14f0d4e908157bf882f72cd92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:09 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 04:59:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).mds e2 new map
Jan 31 04:59:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).mds e2 print_map
e2
btime 2026-01-31T09:59:09:909435+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	2
flags	12 joinable allow_snaps allow_multimds_snaps
created	2026-01-31T09:59:09.909032+0000
modified	2026-01-31T09:59:09.909032+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in	
up	{}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	0
qdb_cluster	leader: 0 members: 
Jan 31 04:59:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Jan 31 04:59:09 np0005603787 podman[92859]: 2026-01-31 09:59:09.918071228 +0000 UTC m=+0.121381319 container died 35f593bafbd6bc8126e9c03ed2a69b0ffc41e8f14f0d4e908157bf882f72cd92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 04:59:09 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Jan 31 04:59:09 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 31 04:59:09 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 31 04:59:09 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 31 04:59:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 04:59:09 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:09 np0005603787 ceph-mgr[75453]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 31 04:59:09 np0005603787 systemd[1]: var-lib-containers-storage-overlay-0c56a7dfe1a60875076b3e1ca3d69dd094e0aea1ad06ad8250b93743ec154eae-merged.mount: Deactivated successfully.
Jan 31 04:59:09 np0005603787 systemd[1]: libpod-40ee93356eb7b5941fa4d381d44f4547f106b84e932c4cfec828d1bd28ee3460.scope: Deactivated successfully.
Jan 31 04:59:09 np0005603787 podman[92859]: 2026-01-31 09:59:09.962426905 +0000 UTC m=+0.165736986 container remove 35f593bafbd6bc8126e9c03ed2a69b0ffc41e8f14f0d4e908157bf882f72cd92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_leavitt, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 04:59:09 np0005603787 systemd[1]: libpod-conmon-35f593bafbd6bc8126e9c03ed2a69b0ffc41e8f14f0d4e908157bf882f72cd92.scope: Deactivated successfully.
Jan 31 04:59:09 np0005603787 podman[92744]: 2026-01-31 09:59:09.969196516 +0000 UTC m=+0.693780296 container died 40ee93356eb7b5941fa4d381d44f4547f106b84e932c4cfec828d1bd28ee3460 (image=quay.io/ceph/ceph:v20, name=reverent_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:09 np0005603787 systemd[1]: var-lib-containers-storage-overlay-945841ef615fd4841251861b6020f525fad993999b86e98e76b6ca907925c956-merged.mount: Deactivated successfully.
Jan 31 04:59:10 np0005603787 podman[92744]: 2026-01-31 09:59:10.010098701 +0000 UTC m=+0.734682471 container remove 40ee93356eb7b5941fa4d381d44f4547f106b84e932c4cfec828d1bd28ee3460 (image=quay.io/ceph/ceph:v20, name=reverent_roentgen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:10 np0005603787 systemd[1]: libpod-conmon-40ee93356eb7b5941fa4d381d44f4547f106b84e932c4cfec828d1bd28ee3460.scope: Deactivated successfully.
Jan 31 04:59:10 np0005603787 podman[92916]: 2026-01-31 09:59:10.071306388 +0000 UTC m=+0.033490887 container create aa495113d8f2043a8f80f51861877bb5f395e3ef63a954b7ec8cda92fe392ce3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sanderson, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 04:59:10 np0005603787 systemd[1]: Started libpod-conmon-aa495113d8f2043a8f80f51861877bb5f395e3ef63a954b7ec8cda92fe392ce3.scope.
Jan 31 04:59:10 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:10 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67064389e341bbf2cad2254513f73eb91bccfb70cef2ce61fa28ea974e03f69f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:10 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67064389e341bbf2cad2254513f73eb91bccfb70cef2ce61fa28ea974e03f69f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:10 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67064389e341bbf2cad2254513f73eb91bccfb70cef2ce61fa28ea974e03f69f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:10 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67064389e341bbf2cad2254513f73eb91bccfb70cef2ce61fa28ea974e03f69f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:10 np0005603787 podman[92916]: 2026-01-31 09:59:10.143267414 +0000 UTC m=+0.105451943 container init aa495113d8f2043a8f80f51861877bb5f395e3ef63a954b7ec8cda92fe392ce3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sanderson, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:10 np0005603787 podman[92916]: 2026-01-31 09:59:10.147954349 +0000 UTC m=+0.110138848 container start aa495113d8f2043a8f80f51861877bb5f395e3ef63a954b7ec8cda92fe392ce3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sanderson, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 04:59:10 np0005603787 podman[92916]: 2026-01-31 09:59:10.055456384 +0000 UTC m=+0.017640893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:10 np0005603787 podman[92916]: 2026-01-31 09:59:10.154256729 +0000 UTC m=+0.116441228 container attach aa495113d8f2043a8f80f51861877bb5f395e3ef63a954b7ec8cda92fe392ce3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sanderson, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 04:59:10 np0005603787 python3[92962]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:59:10 np0005603787 podman[92965]: 2026-01-31 09:59:10.348441324 +0000 UTC m=+0.045706804 container create c691ccda8fcce5f1663063dca512771003a98c93cfa6ad2bdb31c8aea3084c23 (image=quay.io/ceph/ceph:v20, name=adoring_khorana, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 31 04:59:10 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 31 04:59:10 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 31 04:59:10 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 31 04:59:10 np0005603787 ceph-mon[75160]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 04:59:10 np0005603787 ceph-mon[75160]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 04:59:10 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 04:59:10 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:10 np0005603787 systemd[1]: Started libpod-conmon-c691ccda8fcce5f1663063dca512771003a98c93cfa6ad2bdb31c8aea3084c23.scope.
Jan 31 04:59:10 np0005603787 podman[92965]: 2026-01-31 09:59:10.320744463 +0000 UTC m=+0.018009933 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:59:10 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:10 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40471b5836a4bc33a76d7171af3443fe56a7c9e646feac5654ab1941087d14f6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:10 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40471b5836a4bc33a76d7171af3443fe56a7c9e646feac5654ab1941087d14f6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:10 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40471b5836a4bc33a76d7171af3443fe56a7c9e646feac5654ab1941087d14f6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:10 np0005603787 podman[92965]: 2026-01-31 09:59:10.438811062 +0000 UTC m=+0.136076552 container init c691ccda8fcce5f1663063dca512771003a98c93cfa6ad2bdb31c8aea3084c23 (image=quay.io/ceph/ceph:v20, name=adoring_khorana, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 04:59:10 np0005603787 podman[92965]: 2026-01-31 09:59:10.444660738 +0000 UTC m=+0.141926198 container start c691ccda8fcce5f1663063dca512771003a98c93cfa6ad2bdb31c8aea3084c23 (image=quay.io/ceph/ceph:v20, name=adoring_khorana, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:59:10 np0005603787 podman[92965]: 2026-01-31 09:59:10.451866461 +0000 UTC m=+0.149131911 container attach c691ccda8fcce5f1663063dca512771003a98c93cfa6ad2bdb31c8aea3084c23 (image=quay.io/ceph/ceph:v20, name=adoring_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 04:59:10 np0005603787 lvm[93074]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 04:59:10 np0005603787 lvm[93074]: VG ceph_vg0 finished
Jan 31 04:59:10 np0005603787 lvm[93077]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 04:59:10 np0005603787 lvm[93077]: VG ceph_vg1 finished
Jan 31 04:59:10 np0005603787 lvm[93079]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 04:59:10 np0005603787 lvm[93079]: VG ceph_vg2 finished
Jan 31 04:59:10 np0005603787 gifted_sanderson[92935]: {}
Jan 31 04:59:10 np0005603787 podman[92916]: 2026-01-31 09:59:10.88493626 +0000 UTC m=+0.847120769 container died aa495113d8f2043a8f80f51861877bb5f395e3ef63a954b7ec8cda92fe392ce3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 04:59:10 np0005603787 systemd[1]: libpod-aa495113d8f2043a8f80f51861877bb5f395e3ef63a954b7ec8cda92fe392ce3.scope: Deactivated successfully.
Jan 31 04:59:10 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:59:10 np0005603787 ceph-mgr[75453]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 31 04:59:10 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 31 04:59:10 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 04:59:10 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:10 np0005603787 adoring_khorana[92990]: Scheduled mds.cephfs update...
Jan 31 04:59:10 np0005603787 systemd[1]: var-lib-containers-storage-overlay-67064389e341bbf2cad2254513f73eb91bccfb70cef2ce61fa28ea974e03f69f-merged.mount: Deactivated successfully.
Jan 31 04:59:10 np0005603787 systemd[1]: libpod-c691ccda8fcce5f1663063dca512771003a98c93cfa6ad2bdb31c8aea3084c23.scope: Deactivated successfully.
Jan 31 04:59:10 np0005603787 podman[92916]: 2026-01-31 09:59:10.944796471 +0000 UTC m=+0.906980970 container remove aa495113d8f2043a8f80f51861877bb5f395e3ef63a954b7ec8cda92fe392ce3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 04:59:10 np0005603787 podman[92965]: 2026-01-31 09:59:10.946904958 +0000 UTC m=+0.644170418 container died c691ccda8fcce5f1663063dca512771003a98c93cfa6ad2bdb31c8aea3084c23 (image=quay.io/ceph/ceph:v20, name=adoring_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 04:59:10 np0005603787 systemd[1]: libpod-conmon-aa495113d8f2043a8f80f51861877bb5f395e3ef63a954b7ec8cda92fe392ce3.scope: Deactivated successfully.
Jan 31 04:59:10 np0005603787 systemd[1]: var-lib-containers-storage-overlay-40471b5836a4bc33a76d7171af3443fe56a7c9e646feac5654ab1941087d14f6-merged.mount: Deactivated successfully.
Jan 31 04:59:10 np0005603787 podman[92965]: 2026-01-31 09:59:10.987374611 +0000 UTC m=+0.684640191 container remove c691ccda8fcce5f1663063dca512771003a98c93cfa6ad2bdb31c8aea3084c23 (image=quay.io/ceph/ceph:v20, name=adoring_khorana, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:11 np0005603787 systemd[1]: libpod-conmon-c691ccda8fcce5f1663063dca512771003a98c93cfa6ad2bdb31c8aea3084c23.scope: Deactivated successfully.
Jan 31 04:59:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:59:11 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:59:11 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:59:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:59:11 np0005603787 ceph-mon[75160]: Saving service mds.cephfs spec with placement compute-0
Jan 31 04:59:11 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:11 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:11 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:11 np0005603787 podman[93230]: 2026-01-31 09:59:11.535474816 +0000 UTC m=+0.060616702 container exec 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:11 np0005603787 podman[93230]: 2026-01-31 09:59:11.673534651 +0000 UTC m=+0.198676537 container exec_died 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:59:11 np0005603787 python3[93380]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:59:12 np0005603787 python3[93532]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769853551.6881115-36753-20987179292810/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=6238d334966e291f00ca4a59110821f30ba4f9b5 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: Saving service mds.cephfs spec with placement compute-0
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:12 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 04:59:12 np0005603787 podman[93619]: 2026-01-31 09:59:12.448921639 +0000 UTC m=+0.033940480 container create 14b88fa0fb35102c8ecf377b5999a9ce3a710261fad6cfaab1935612ad53cbba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lalande, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 04:59:12 np0005603787 systemd[1]: Started libpod-conmon-14b88fa0fb35102c8ecf377b5999a9ce3a710261fad6cfaab1935612ad53cbba.scope.
Jan 31 04:59:12 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:12 np0005603787 podman[93619]: 2026-01-31 09:59:12.526278288 +0000 UTC m=+0.111297129 container init 14b88fa0fb35102c8ecf377b5999a9ce3a710261fad6cfaab1935612ad53cbba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lalande, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:12 np0005603787 podman[93619]: 2026-01-31 09:59:12.432773576 +0000 UTC m=+0.017792417 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:12 np0005603787 podman[93619]: 2026-01-31 09:59:12.532641128 +0000 UTC m=+0.117659949 container start 14b88fa0fb35102c8ecf377b5999a9ce3a710261fad6cfaab1935612ad53cbba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lalande, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:59:12 np0005603787 zen_lalande[93661]: 167 167
Jan 31 04:59:12 np0005603787 systemd[1]: libpod-14b88fa0fb35102c8ecf377b5999a9ce3a710261fad6cfaab1935612ad53cbba.scope: Deactivated successfully.
Jan 31 04:59:12 np0005603787 conmon[93661]: conmon 14b88fa0fb35102c8ecf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14b88fa0fb35102c8ecf377b5999a9ce3a710261fad6cfaab1935612ad53cbba.scope/container/memory.events
Jan 31 04:59:12 np0005603787 podman[93619]: 2026-01-31 09:59:12.538526576 +0000 UTC m=+0.123545417 container attach 14b88fa0fb35102c8ecf377b5999a9ce3a710261fad6cfaab1935612ad53cbba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:59:12 np0005603787 podman[93619]: 2026-01-31 09:59:12.539889223 +0000 UTC m=+0.124908054 container died 14b88fa0fb35102c8ecf377b5999a9ce3a710261fad6cfaab1935612ad53cbba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:12 np0005603787 systemd[1]: var-lib-containers-storage-overlay-4fe4894a4226ba945581465dde71956d11c86099c85cb29e9d11d99e71259579-merged.mount: Deactivated successfully.
Jan 31 04:59:12 np0005603787 podman[93619]: 2026-01-31 09:59:12.586356546 +0000 UTC m=+0.171375367 container remove 14b88fa0fb35102c8ecf377b5999a9ce3a710261fad6cfaab1935612ad53cbba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lalande, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:59:12 np0005603787 systemd[1]: libpod-conmon-14b88fa0fb35102c8ecf377b5999a9ce3a710261fad6cfaab1935612ad53cbba.scope: Deactivated successfully.
Jan 31 04:59:12 np0005603787 python3[93663]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:59:12 np0005603787 podman[93681]: 2026-01-31 09:59:12.666758957 +0000 UTC m=+0.036902988 container create 905b2cd83b44c39ed1ac7818b4fb4e7bf1dea71cf2a88d93bfeb1e22ef5c6612 (image=quay.io/ceph/ceph:v20, name=infallible_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 04:59:12 np0005603787 systemd[1]: Started libpod-conmon-905b2cd83b44c39ed1ac7818b4fb4e7bf1dea71cf2a88d93bfeb1e22ef5c6612.scope.
Jan 31 04:59:12 np0005603787 podman[93697]: 2026-01-31 09:59:12.712494071 +0000 UTC m=+0.052046223 container create db87991f4355d0c339a8f030f8f0c33977bf34a90d0734da66409b80fd656072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_dewdney, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Jan 31 04:59:12 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:12 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b7f64d4d7e4a03aa19660c0ae96528eb97e7351ecaa7f014d49a77f41a7247/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:12 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b7f64d4d7e4a03aa19660c0ae96528eb97e7351ecaa7f014d49a77f41a7247/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:12 np0005603787 systemd[1]: Started libpod-conmon-db87991f4355d0c339a8f030f8f0c33977bf34a90d0734da66409b80fd656072.scope.
Jan 31 04:59:12 np0005603787 podman[93681]: 2026-01-31 09:59:12.738904048 +0000 UTC m=+0.109048109 container init 905b2cd83b44c39ed1ac7818b4fb4e7bf1dea71cf2a88d93bfeb1e22ef5c6612 (image=quay.io/ceph/ceph:v20, name=infallible_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 04:59:12 np0005603787 podman[93681]: 2026-01-31 09:59:12.743733587 +0000 UTC m=+0.113877628 container start 905b2cd83b44c39ed1ac7818b4fb4e7bf1dea71cf2a88d93bfeb1e22ef5c6612 (image=quay.io/ceph/ceph:v20, name=infallible_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Jan 31 04:59:12 np0005603787 podman[93681]: 2026-01-31 09:59:12.651366665 +0000 UTC m=+0.021510726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:59:12 np0005603787 podman[93681]: 2026-01-31 09:59:12.749267535 +0000 UTC m=+0.119411596 container attach 905b2cd83b44c39ed1ac7818b4fb4e7bf1dea71cf2a88d93bfeb1e22ef5c6612 (image=quay.io/ceph/ceph:v20, name=infallible_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 04:59:12 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:12 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/147ae24037248b4ea42ea489ce429060dfc27561a67811be4acd67bde7be6723/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:12 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/147ae24037248b4ea42ea489ce429060dfc27561a67811be4acd67bde7be6723/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:12 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/147ae24037248b4ea42ea489ce429060dfc27561a67811be4acd67bde7be6723/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:12 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/147ae24037248b4ea42ea489ce429060dfc27561a67811be4acd67bde7be6723/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:12 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/147ae24037248b4ea42ea489ce429060dfc27561a67811be4acd67bde7be6723/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:12 np0005603787 podman[93697]: 2026-01-31 09:59:12.767472552 +0000 UTC m=+0.107024704 container init db87991f4355d0c339a8f030f8f0c33977bf34a90d0734da66409b80fd656072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_dewdney, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 04:59:12 np0005603787 podman[93697]: 2026-01-31 09:59:12.776336849 +0000 UTC m=+0.115888991 container start db87991f4355d0c339a8f030f8f0c33977bf34a90d0734da66409b80fd656072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:12 np0005603787 podman[93697]: 2026-01-31 09:59:12.78010982 +0000 UTC m=+0.119661992 container attach db87991f4355d0c339a8f030f8f0c33977bf34a90d0734da66409b80fd656072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_dewdney, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:12 np0005603787 podman[93697]: 2026-01-31 09:59:12.691547221 +0000 UTC m=+0.031099383 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:59:13 np0005603787 stupefied_dewdney[93718]: --> passed data devices: 0 physical, 3 LVM
Jan 31 04:59:13 np0005603787 stupefied_dewdney[93718]: --> All data devices are unavailable
Jan 31 04:59:13 np0005603787 systemd[1]: libpod-db87991f4355d0c339a8f030f8f0c33977bf34a90d0734da66409b80fd656072.scope: Deactivated successfully.
Jan 31 04:59:13 np0005603787 podman[93697]: 2026-01-31 09:59:13.191472968 +0000 UTC m=+0.531025120 container died db87991f4355d0c339a8f030f8f0c33977bf34a90d0734da66409b80fd656072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 04:59:13 np0005603787 systemd[1]: var-lib-containers-storage-overlay-147ae24037248b4ea42ea489ce429060dfc27561a67811be4acd67bde7be6723-merged.mount: Deactivated successfully.
Jan 31 04:59:13 np0005603787 podman[93697]: 2026-01-31 09:59:13.246200762 +0000 UTC m=+0.585752904 container remove db87991f4355d0c339a8f030f8f0c33977bf34a90d0734da66409b80fd656072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 31 04:59:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3491050362' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 31 04:59:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3491050362' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 04:59:13 np0005603787 systemd[1]: libpod-conmon-db87991f4355d0c339a8f030f8f0c33977bf34a90d0734da66409b80fd656072.scope: Deactivated successfully.
Jan 31 04:59:13 np0005603787 systemd[1]: libpod-905b2cd83b44c39ed1ac7818b4fb4e7bf1dea71cf2a88d93bfeb1e22ef5c6612.scope: Deactivated successfully.
Jan 31 04:59:13 np0005603787 podman[93681]: 2026-01-31 09:59:13.269551807 +0000 UTC m=+0.639695858 container died 905b2cd83b44c39ed1ac7818b4fb4e7bf1dea71cf2a88d93bfeb1e22ef5c6612 (image=quay.io/ceph/ceph:v20, name=infallible_hodgkin, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 04:59:13 np0005603787 podman[93681]: 2026-01-31 09:59:13.309313661 +0000 UTC m=+0.679457712 container remove 905b2cd83b44c39ed1ac7818b4fb4e7bf1dea71cf2a88d93bfeb1e22ef5c6612 (image=quay.io/ceph/ceph:v20, name=infallible_hodgkin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 04:59:13 np0005603787 systemd[1]: libpod-conmon-905b2cd83b44c39ed1ac7818b4fb4e7bf1dea71cf2a88d93bfeb1e22ef5c6612.scope: Deactivated successfully.
Jan 31 04:59:13 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/3491050362' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 31 04:59:13 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/3491050362' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 04:59:13 np0005603787 systemd[1]: var-lib-containers-storage-overlay-e6b7f64d4d7e4a03aa19660c0ae96528eb97e7351ecaa7f014d49a77f41a7247-merged.mount: Deactivated successfully.
Jan 31 04:59:13 np0005603787 podman[93846]: 2026-01-31 09:59:13.671383449 +0000 UTC m=+0.046440984 container create 36b73915a38614c07adac283e922a5786c147bc2c05c5b84236f99e63b12f3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_fermi, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:13 np0005603787 systemd[1]: Started libpod-conmon-36b73915a38614c07adac283e922a5786c147bc2c05c5b84236f99e63b12f3d1.scope.
Jan 31 04:59:13 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:13 np0005603787 podman[93846]: 2026-01-31 09:59:13.740639712 +0000 UTC m=+0.115697267 container init 36b73915a38614c07adac283e922a5786c147bc2c05c5b84236f99e63b12f3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 04:59:13 np0005603787 podman[93846]: 2026-01-31 09:59:13.650823569 +0000 UTC m=+0.025881114 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:13 np0005603787 podman[93846]: 2026-01-31 09:59:13.746381166 +0000 UTC m=+0.121438691 container start 36b73915a38614c07adac283e922a5786c147bc2c05c5b84236f99e63b12f3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 04:59:13 np0005603787 modest_fermi[93862]: 167 167
Jan 31 04:59:13 np0005603787 systemd[1]: libpod-36b73915a38614c07adac283e922a5786c147bc2c05c5b84236f99e63b12f3d1.scope: Deactivated successfully.
Jan 31 04:59:13 np0005603787 podman[93846]: 2026-01-31 09:59:13.7502627 +0000 UTC m=+0.125320235 container attach 36b73915a38614c07adac283e922a5786c147bc2c05c5b84236f99e63b12f3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 04:59:13 np0005603787 podman[93846]: 2026-01-31 09:59:13.750724102 +0000 UTC m=+0.125781627 container died 36b73915a38614c07adac283e922a5786c147bc2c05c5b84236f99e63b12f3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_fermi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:13 np0005603787 systemd[1]: var-lib-containers-storage-overlay-9177815df2448c32fb9d4cb1a5d6e1c392ae2fa233d2a6cee61c423a740c1f02-merged.mount: Deactivated successfully.
Jan 31 04:59:13 np0005603787 podman[93846]: 2026-01-31 09:59:13.790051274 +0000 UTC m=+0.165108809 container remove 36b73915a38614c07adac283e922a5786c147bc2c05c5b84236f99e63b12f3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_fermi, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 04:59:13 np0005603787 systemd[1]: libpod-conmon-36b73915a38614c07adac283e922a5786c147bc2c05c5b84236f99e63b12f3d1.scope: Deactivated successfully.
Jan 31 04:59:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:59:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:59:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:59:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:59:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:59:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:59:13 np0005603787 podman[93886]: 2026-01-31 09:59:13.914594507 +0000 UTC m=+0.035815749 container create acdcbf89b8ca1223f1843ae3f11182d32b4de2d811b1dd9e63ed8b586c98b3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:13 np0005603787 systemd[1]: Started libpod-conmon-acdcbf89b8ca1223f1843ae3f11182d32b4de2d811b1dd9e63ed8b586c98b3c6.scope.
Jan 31 04:59:13 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:13 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83df31ce6cf030844cf306e49120578b86a6fe60b0cfdc3d386f410e8dee944a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:13 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83df31ce6cf030844cf306e49120578b86a6fe60b0cfdc3d386f410e8dee944a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:13 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83df31ce6cf030844cf306e49120578b86a6fe60b0cfdc3d386f410e8dee944a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:13 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83df31ce6cf030844cf306e49120578b86a6fe60b0cfdc3d386f410e8dee944a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:13 np0005603787 podman[93886]: 2026-01-31 09:59:13.89865815 +0000 UTC m=+0.019879422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:14 np0005603787 podman[93886]: 2026-01-31 09:59:14.0047769 +0000 UTC m=+0.125998172 container init acdcbf89b8ca1223f1843ae3f11182d32b4de2d811b1dd9e63ed8b586c98b3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_satoshi, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:14 np0005603787 podman[93886]: 2026-01-31 09:59:14.009413284 +0000 UTC m=+0.130634526 container start acdcbf89b8ca1223f1843ae3f11182d32b4de2d811b1dd9e63ed8b586c98b3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 04:59:14 np0005603787 podman[93886]: 2026-01-31 09:59:14.013526314 +0000 UTC m=+0.134747576 container attach acdcbf89b8ca1223f1843ae3f11182d32b4de2d811b1dd9e63ed8b586c98b3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_satoshi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]: {
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:    "0": [
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:        {
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "devices": [
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "/dev/loop3"
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            ],
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "lv_name": "ceph_lv0",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "lv_size": "21470642176",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "name": "ceph_lv0",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "tags": {
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.cluster_name": "ceph",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.crush_device_class": "",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.encrypted": "0",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.objectstore": "bluestore",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.osd_id": "0",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.type": "block",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.vdo": "0",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.with_tpm": "0"
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            },
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "type": "block",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "vg_name": "ceph_vg0"
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:        }
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:    ],
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:    "1": [
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:        {
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "devices": [
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "/dev/loop4"
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            ],
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "lv_name": "ceph_lv1",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "lv_size": "21470642176",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "name": "ceph_lv1",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "tags": {
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.cluster_name": "ceph",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.crush_device_class": "",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.encrypted": "0",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.objectstore": "bluestore",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.osd_id": "1",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.type": "block",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.vdo": "0",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.with_tpm": "0"
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            },
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "type": "block",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "vg_name": "ceph_vg1"
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:        }
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:    ],
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:    "2": [
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:        {
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "devices": [
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "/dev/loop5"
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            ],
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "lv_name": "ceph_lv2",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "lv_size": "21470642176",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "name": "ceph_lv2",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "tags": {
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.cluster_name": "ceph",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.crush_device_class": "",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.encrypted": "0",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.objectstore": "bluestore",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.osd_id": "2",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.type": "block",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.vdo": "0",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:                "ceph.with_tpm": "0"
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            },
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "type": "block",
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:            "vg_name": "ceph_vg2"
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:        }
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]:    ]
Jan 31 04:59:14 np0005603787 epic_satoshi[93902]: }
Jan 31 04:59:14 np0005603787 python3[93932]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:59:14 np0005603787 systemd[1]: libpod-acdcbf89b8ca1223f1843ae3f11182d32b4de2d811b1dd9e63ed8b586c98b3c6.scope: Deactivated successfully.
Jan 31 04:59:14 np0005603787 podman[93886]: 2026-01-31 09:59:14.282772198 +0000 UTC m=+0.403993460 container died acdcbf89b8ca1223f1843ae3f11182d32b4de2d811b1dd9e63ed8b586c98b3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 04:59:14 np0005603787 systemd[1]: var-lib-containers-storage-overlay-83df31ce6cf030844cf306e49120578b86a6fe60b0cfdc3d386f410e8dee944a-merged.mount: Deactivated successfully.
Jan 31 04:59:14 np0005603787 podman[93938]: 2026-01-31 09:59:14.327764302 +0000 UTC m=+0.047775139 container create 1b5e046c429cc16a2dd63f668091c44e34cf3449199545afa2383a7d987836c9 (image=quay.io/ceph/ceph:v20, name=romantic_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 04:59:14 np0005603787 podman[93886]: 2026-01-31 09:59:14.333849455 +0000 UTC m=+0.455070687 container remove acdcbf89b8ca1223f1843ae3f11182d32b4de2d811b1dd9e63ed8b586c98b3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 04:59:14 np0005603787 systemd[1]: libpod-conmon-acdcbf89b8ca1223f1843ae3f11182d32b4de2d811b1dd9e63ed8b586c98b3c6.scope: Deactivated successfully.
Jan 31 04:59:14 np0005603787 systemd[1]: Started libpod-conmon-1b5e046c429cc16a2dd63f668091c44e34cf3449199545afa2383a7d987836c9.scope.
Jan 31 04:59:14 np0005603787 podman[93938]: 2026-01-31 09:59:14.303478603 +0000 UTC m=+0.023489460 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:59:14 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:14 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a190bbc1011aa34440f3147d5b1655bc11349386bd0adac1e76e7ddec6edc4b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:14 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a190bbc1011aa34440f3147d5b1655bc11349386bd0adac1e76e7ddec6edc4b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:14 np0005603787 podman[93938]: 2026-01-31 09:59:14.422647621 +0000 UTC m=+0.142658478 container init 1b5e046c429cc16a2dd63f668091c44e34cf3449199545afa2383a7d987836c9 (image=quay.io/ceph/ceph:v20, name=romantic_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:14 np0005603787 podman[93938]: 2026-01-31 09:59:14.427976234 +0000 UTC m=+0.147987071 container start 1b5e046c429cc16a2dd63f668091c44e34cf3449199545afa2383a7d987836c9 (image=quay.io/ceph/ceph:v20, name=romantic_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:14 np0005603787 podman[93938]: 2026-01-31 09:59:14.432251678 +0000 UTC m=+0.152262545 container attach 1b5e046c429cc16a2dd63f668091c44e34cf3449199545afa2383a7d987836c9 (image=quay.io/ceph/ceph:v20, name=romantic_edison, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 04:59:14 np0005603787 podman[94052]: 2026-01-31 09:59:14.709006804 +0000 UTC m=+0.034512605 container create 47724e7ad1a7fc02a979c0fd5f5cbf832c6e637611bf5e95c4ef87c6bb06fcbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:14 np0005603787 systemd[1]: Started libpod-conmon-47724e7ad1a7fc02a979c0fd5f5cbf832c6e637611bf5e95c4ef87c6bb06fcbe.scope.
Jan 31 04:59:14 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:14 np0005603787 podman[94052]: 2026-01-31 09:59:14.774134526 +0000 UTC m=+0.099640367 container init 47724e7ad1a7fc02a979c0fd5f5cbf832c6e637611bf5e95c4ef87c6bb06fcbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:14 np0005603787 podman[94052]: 2026-01-31 09:59:14.778462712 +0000 UTC m=+0.103968513 container start 47724e7ad1a7fc02a979c0fd5f5cbf832c6e637611bf5e95c4ef87c6bb06fcbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 04:59:14 np0005603787 determined_morse[94068]: 167 167
Jan 31 04:59:14 np0005603787 systemd[1]: libpod-47724e7ad1a7fc02a979c0fd5f5cbf832c6e637611bf5e95c4ef87c6bb06fcbe.scope: Deactivated successfully.
Jan 31 04:59:14 np0005603787 podman[94052]: 2026-01-31 09:59:14.783548588 +0000 UTC m=+0.109054399 container attach 47724e7ad1a7fc02a979c0fd5f5cbf832c6e637611bf5e95c4ef87c6bb06fcbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:59:14 np0005603787 podman[94052]: 2026-01-31 09:59:14.783957349 +0000 UTC m=+0.109463150 container died 47724e7ad1a7fc02a979c0fd5f5cbf832c6e637611bf5e95c4ef87c6bb06fcbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 04:59:14 np0005603787 podman[94052]: 2026-01-31 09:59:14.693960461 +0000 UTC m=+0.019466282 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:14 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f61bb4be3304f4bf8062e94d294fc5dfefb192d864e4d74eb51a0abfab11823a-merged.mount: Deactivated successfully.
Jan 31 04:59:14 np0005603787 podman[94052]: 2026-01-31 09:59:14.823058055 +0000 UTC m=+0.148563856 container remove 47724e7ad1a7fc02a979c0fd5f5cbf832c6e637611bf5e95c4ef87c6bb06fcbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 04:59:14 np0005603787 systemd[1]: libpod-conmon-47724e7ad1a7fc02a979c0fd5f5cbf832c6e637611bf5e95c4ef87c6bb06fcbe.scope: Deactivated successfully.
Jan 31 04:59:14 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 04:59:14 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2242476570' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 04:59:14 np0005603787 romantic_edison[93967]: 
Jan 31 04:59:14 np0005603787 romantic_edison[93967]: {"fsid":"962d77ae-dc67-5de8-89d8-3d1670c67b61","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":113,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":30,"num_osds":3,"num_up_osds":3,"osd_up_since":1769853527,"num_in_osds":3,"osd_in_since":1769853497,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83832832,"bytes_avail":64328093696,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-01-31T09:59:09:909435+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-31T09:58:45.043527+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 31 04:59:14 np0005603787 systemd[1]: libpod-1b5e046c429cc16a2dd63f668091c44e34cf3449199545afa2383a7d987836c9.scope: Deactivated successfully.
Jan 31 04:59:14 np0005603787 conmon[93967]: conmon 1b5e046c429cc16a2dd6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1b5e046c429cc16a2dd63f668091c44e34cf3449199545afa2383a7d987836c9.scope/container/memory.events
Jan 31 04:59:14 np0005603787 podman[93938]: 2026-01-31 09:59:14.927018047 +0000 UTC m=+0.647028884 container died 1b5e046c429cc16a2dd63f668091c44e34cf3449199545afa2383a7d987836c9 (image=quay.io/ceph/ceph:v20, name=romantic_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:59:14 np0005603787 podman[94095]: 2026-01-31 09:59:14.948012769 +0000 UTC m=+0.036717264 container create aeecb57835f0045db6b32601da5daa27d2f5b928d91689b64108fd063fe2ed43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 04:59:14 np0005603787 systemd[1]: var-lib-containers-storage-overlay-1a190bbc1011aa34440f3147d5b1655bc11349386bd0adac1e76e7ddec6edc4b-merged.mount: Deactivated successfully.
Jan 31 04:59:14 np0005603787 podman[93938]: 2026-01-31 09:59:14.972424762 +0000 UTC m=+0.692435599 container remove 1b5e046c429cc16a2dd63f668091c44e34cf3449199545afa2383a7d987836c9 (image=quay.io/ceph/ceph:v20, name=romantic_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:14 np0005603787 systemd[1]: libpod-conmon-1b5e046c429cc16a2dd63f668091c44e34cf3449199545afa2383a7d987836c9.scope: Deactivated successfully.
Jan 31 04:59:15 np0005603787 systemd[1]: Started libpod-conmon-aeecb57835f0045db6b32601da5daa27d2f5b928d91689b64108fd063fe2ed43.scope.
Jan 31 04:59:15 np0005603787 podman[94095]: 2026-01-31 09:59:14.931004724 +0000 UTC m=+0.019709249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:15 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a7f2b2ca72d50bbef78e9951780f8914d02860191d01697fea64bd9f19ca196/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a7f2b2ca72d50bbef78e9951780f8914d02860191d01697fea64bd9f19ca196/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a7f2b2ca72d50bbef78e9951780f8914d02860191d01697fea64bd9f19ca196/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a7f2b2ca72d50bbef78e9951780f8914d02860191d01697fea64bd9f19ca196/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:59:15 np0005603787 podman[94095]: 2026-01-31 09:59:15.054313243 +0000 UTC m=+0.143017758 container init aeecb57835f0045db6b32601da5daa27d2f5b928d91689b64108fd063fe2ed43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_matsumoto, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:59:15 np0005603787 podman[94095]: 2026-01-31 09:59:15.060734915 +0000 UTC m=+0.149439410 container start aeecb57835f0045db6b32601da5daa27d2f5b928d91689b64108fd063fe2ed43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_matsumoto, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:15 np0005603787 podman[94095]: 2026-01-31 09:59:15.064486876 +0000 UTC m=+0.153191371 container attach aeecb57835f0045db6b32601da5daa27d2f5b928d91689b64108fd063fe2ed43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_matsumoto, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:59:15 np0005603787 python3[94157]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:59:15 np0005603787 podman[94168]: 2026-01-31 09:59:15.283410384 +0000 UTC m=+0.032598433 container create 122a722582ffe86fab5b7be46e74c301923a5a0b8f5ad91de3d36db8ea48e202 (image=quay.io/ceph/ceph:v20, name=pedantic_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:15 np0005603787 systemd[1]: Started libpod-conmon-122a722582ffe86fab5b7be46e74c301923a5a0b8f5ad91de3d36db8ea48e202.scope.
Jan 31 04:59:15 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac224ed2ac0f68a36acdcbbd783bf2a0a08eb08905e33b6d12f5166fb4ae6f99/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac224ed2ac0f68a36acdcbbd783bf2a0a08eb08905e33b6d12f5166fb4ae6f99/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:15 np0005603787 podman[94168]: 2026-01-31 09:59:15.355655747 +0000 UTC m=+0.104843796 container init 122a722582ffe86fab5b7be46e74c301923a5a0b8f5ad91de3d36db8ea48e202 (image=quay.io/ceph/ceph:v20, name=pedantic_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 04:59:15 np0005603787 podman[94168]: 2026-01-31 09:59:15.361041371 +0000 UTC m=+0.110229420 container start 122a722582ffe86fab5b7be46e74c301923a5a0b8f5ad91de3d36db8ea48e202 (image=quay.io/ceph/ceph:v20, name=pedantic_rubin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 04:59:15 np0005603787 podman[94168]: 2026-01-31 09:59:15.26794505 +0000 UTC m=+0.017133119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:59:15 np0005603787 podman[94168]: 2026-01-31 09:59:15.365521261 +0000 UTC m=+0.114709330 container attach 122a722582ffe86fab5b7be46e74c301923a5a0b8f5ad91de3d36db8ea48e202 (image=quay.io/ceph/ceph:v20, name=pedantic_rubin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 04:59:15 np0005603787 lvm[94269]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 04:59:15 np0005603787 lvm[94270]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 04:59:15 np0005603787 lvm[94269]: VG ceph_vg1 finished
Jan 31 04:59:15 np0005603787 lvm[94270]: VG ceph_vg0 finished
Jan 31 04:59:15 np0005603787 lvm[94272]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 04:59:15 np0005603787 lvm[94272]: VG ceph_vg2 finished
Jan 31 04:59:15 np0005603787 intelligent_matsumoto[94127]: {}
Jan 31 04:59:15 np0005603787 systemd[1]: libpod-aeecb57835f0045db6b32601da5daa27d2f5b928d91689b64108fd063fe2ed43.scope: Deactivated successfully.
Jan 31 04:59:15 np0005603787 systemd[1]: libpod-aeecb57835f0045db6b32601da5daa27d2f5b928d91689b64108fd063fe2ed43.scope: Consumed 1.043s CPU time.
Jan 31 04:59:15 np0005603787 podman[94095]: 2026-01-31 09:59:15.768497243 +0000 UTC m=+0.857201738 container died aeecb57835f0045db6b32601da5daa27d2f5b928d91689b64108fd063fe2ed43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_matsumoto, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:59:15 np0005603787 systemd[1]: var-lib-containers-storage-overlay-8a7f2b2ca72d50bbef78e9951780f8914d02860191d01697fea64bd9f19ca196-merged.mount: Deactivated successfully.
Jan 31 04:59:15 np0005603787 podman[94095]: 2026-01-31 09:59:15.824413599 +0000 UTC m=+0.913118094 container remove aeecb57835f0045db6b32601da5daa27d2f5b928d91689b64108fd063fe2ed43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_matsumoto, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:15 np0005603787 systemd[1]: libpod-conmon-aeecb57835f0045db6b32601da5daa27d2f5b928d91689b64108fd063fe2ed43.scope: Deactivated successfully.
Jan 31 04:59:15 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 04:59:15 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2293975985' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 04:59:15 np0005603787 pedantic_rubin[94185]: 
Jan 31 04:59:15 np0005603787 pedantic_rubin[94185]: {"epoch":1,"fsid":"962d77ae-dc67-5de8-89d8-3d1670c67b61","modified":"2026-01-31T09:57:17.140831Z","created":"2026-01-31T09:57:17.140831Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Jan 31 04:59:15 np0005603787 pedantic_rubin[94185]: dumped monmap epoch 1
Jan 31 04:59:15 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:59:15 np0005603787 systemd[1]: libpod-122a722582ffe86fab5b7be46e74c301923a5a0b8f5ad91de3d36db8ea48e202.scope: Deactivated successfully.
Jan 31 04:59:15 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:15 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:59:15 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:15 np0005603787 ceph-mgr[75453]: [progress INFO root] update: starting ev f63cb7a9-ffd9-4536-ae5c-5fff1c14b8ce (Updating rgw.rgw deployment (+1 -> 1))
Jan 31 04:59:15 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.nqlmbk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 31 04:59:15 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.nqlmbk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 31 04:59:15 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.nqlmbk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 04:59:15 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 31 04:59:15 np0005603787 podman[94288]: 2026-01-31 09:59:15.938375399 +0000 UTC m=+0.028090602 container died 122a722582ffe86fab5b7be46e74c301923a5a0b8f5ad91de3d36db8ea48e202 (image=quay.io/ceph/ceph:v20, name=pedantic_rubin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 04:59:15 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:15 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:59:15 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:59:15 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.nqlmbk on compute-0
Jan 31 04:59:15 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.nqlmbk on compute-0
Jan 31 04:59:15 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ac224ed2ac0f68a36acdcbbd783bf2a0a08eb08905e33b6d12f5166fb4ae6f99-merged.mount: Deactivated successfully.
Jan 31 04:59:15 np0005603787 podman[94288]: 2026-01-31 09:59:15.98513039 +0000 UTC m=+0.074845593 container remove 122a722582ffe86fab5b7be46e74c301923a5a0b8f5ad91de3d36db8ea48e202 (image=quay.io/ceph/ceph:v20, name=pedantic_rubin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:59:15 np0005603787 systemd[1]: libpod-conmon-122a722582ffe86fab5b7be46e74c301923a5a0b8f5ad91de3d36db8ea48e202.scope: Deactivated successfully.
Jan 31 04:59:16 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:59:16 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:16 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:16 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.nqlmbk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 31 04:59:16 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.nqlmbk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 04:59:16 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:16 np0005603787 podman[94417]: 2026-01-31 09:59:16.442201321 +0000 UTC m=+0.040286759 container create 3b10a2c85a635c1c0f7b267fae1f28c1c0c1911830524cb8af78de136836f80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 04:59:16 np0005603787 systemd[1]: Started libpod-conmon-3b10a2c85a635c1c0f7b267fae1f28c1c0c1911830524cb8af78de136836f80b.scope.
Jan 31 04:59:16 np0005603787 python3[94404]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:59:16 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:16 np0005603787 podman[94417]: 2026-01-31 09:59:16.508470534 +0000 UTC m=+0.106555992 container init 3b10a2c85a635c1c0f7b267fae1f28c1c0c1911830524cb8af78de136836f80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ardinghelli, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:16 np0005603787 podman[94417]: 2026-01-31 09:59:16.514992579 +0000 UTC m=+0.113078017 container start 3b10a2c85a635c1c0f7b267fae1f28c1c0c1911830524cb8af78de136836f80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ardinghelli, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 04:59:16 np0005603787 systemd[1]: libpod-3b10a2c85a635c1c0f7b267fae1f28c1c0c1911830524cb8af78de136836f80b.scope: Deactivated successfully.
Jan 31 04:59:16 np0005603787 intelligent_ardinghelli[94433]: 167 167
Jan 31 04:59:16 np0005603787 conmon[94433]: conmon 3b10a2c85a635c1c0f7b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b10a2c85a635c1c0f7b267fae1f28c1c0c1911830524cb8af78de136836f80b.scope/container/memory.events
Jan 31 04:59:16 np0005603787 podman[94417]: 2026-01-31 09:59:16.518723148 +0000 UTC m=+0.116808606 container attach 3b10a2c85a635c1c0f7b267fae1f28c1c0c1911830524cb8af78de136836f80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:16 np0005603787 podman[94417]: 2026-01-31 09:59:16.522294354 +0000 UTC m=+0.120379792 container died 3b10a2c85a635c1c0f7b267fae1f28c1c0c1911830524cb8af78de136836f80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 04:59:16 np0005603787 podman[94417]: 2026-01-31 09:59:16.425753931 +0000 UTC m=+0.023839369 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:16 np0005603787 podman[94436]: 2026-01-31 09:59:16.545874725 +0000 UTC m=+0.048306204 container create e4cfce6deff0556f72be514b7dfa4a71fc65563a4676032c64b57e226cb28cfa (image=quay.io/ceph/ceph:v20, name=hardcore_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 04:59:16 np0005603787 systemd[1]: var-lib-containers-storage-overlay-9e6f2cac8a417eff8e12707581ec3aaeb62b5e899989d575541309651a1558f4-merged.mount: Deactivated successfully.
Jan 31 04:59:16 np0005603787 podman[94417]: 2026-01-31 09:59:16.567453942 +0000 UTC m=+0.165539380 container remove 3b10a2c85a635c1c0f7b267fae1f28c1c0c1911830524cb8af78de136836f80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ardinghelli, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 04:59:16 np0005603787 systemd[1]: Started libpod-conmon-e4cfce6deff0556f72be514b7dfa4a71fc65563a4676032c64b57e226cb28cfa.scope.
Jan 31 04:59:16 np0005603787 systemd[1]: libpod-conmon-3b10a2c85a635c1c0f7b267fae1f28c1c0c1911830524cb8af78de136836f80b.scope: Deactivated successfully.
Jan 31 04:59:16 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:16 np0005603787 podman[94436]: 2026-01-31 09:59:16.522778826 +0000 UTC m=+0.025210315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:59:16 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d45cd51b39f74bc222f4cbf36f450337fc0cc85e6de19b31184351ae606bf147/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:16 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d45cd51b39f74bc222f4cbf36f450337fc0cc85e6de19b31184351ae606bf147/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:16 np0005603787 systemd[1]: Reloading.
Jan 31 04:59:16 np0005603787 podman[94436]: 2026-01-31 09:59:16.629811971 +0000 UTC m=+0.132243480 container init e4cfce6deff0556f72be514b7dfa4a71fc65563a4676032c64b57e226cb28cfa (image=quay.io/ceph/ceph:v20, name=hardcore_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 04:59:16 np0005603787 podman[94436]: 2026-01-31 09:59:16.634673761 +0000 UTC m=+0.137105240 container start e4cfce6deff0556f72be514b7dfa4a71fc65563a4676032c64b57e226cb28cfa (image=quay.io/ceph/ceph:v20, name=hardcore_banzai, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:59:16 np0005603787 podman[94436]: 2026-01-31 09:59:16.639584022 +0000 UTC m=+0.142015501 container attach e4cfce6deff0556f72be514b7dfa4a71fc65563a4676032c64b57e226cb28cfa (image=quay.io/ceph/ceph:v20, name=hardcore_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:16 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:59:16 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:59:16 np0005603787 systemd[1]: Reloading.
Jan 31 04:59:16 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:59:16 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:59:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:59:17 np0005603787 systemd[1]: Starting Ceph rgw.rgw.compute-0.nqlmbk for 962d77ae-dc67-5de8-89d8-3d1670c67b61...
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3713494146' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 31 04:59:17 np0005603787 hardcore_banzai[94463]: [client.openstack]
Jan 31 04:59:17 np0005603787 hardcore_banzai[94463]: 	key = AQDd0X1pAAAAABAAHEcKPx5fsne2IvZPnWualw==
Jan 31 04:59:17 np0005603787 hardcore_banzai[94463]: 	caps mgr = "allow *"
Jan 31 04:59:17 np0005603787 hardcore_banzai[94463]: 	caps mon = "profile rbd"
Jan 31 04:59:17 np0005603787 hardcore_banzai[94463]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 31 04:59:17 np0005603787 systemd[1]: libpod-e4cfce6deff0556f72be514b7dfa4a71fc65563a4676032c64b57e226cb28cfa.scope: Deactivated successfully.
Jan 31 04:59:17 np0005603787 podman[94436]: 2026-01-31 09:59:17.201007075 +0000 UTC m=+0.703438564 container died e4cfce6deff0556f72be514b7dfa4a71fc65563a4676032c64b57e226cb28cfa (image=quay.io/ceph/ceph:v20, name=hardcore_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 04:59:17 np0005603787 systemd[1]: var-lib-containers-storage-overlay-d45cd51b39f74bc222f4cbf36f450337fc0cc85e6de19b31184351ae606bf147-merged.mount: Deactivated successfully.
Jan 31 04:59:17 np0005603787 podman[94436]: 2026-01-31 09:59:17.259699635 +0000 UTC m=+0.762131114 container remove e4cfce6deff0556f72be514b7dfa4a71fc65563a4676032c64b57e226cb28cfa (image=quay.io/ceph/ceph:v20, name=hardcore_banzai, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:17 np0005603787 systemd[1]: libpod-conmon-e4cfce6deff0556f72be514b7dfa4a71fc65563a4676032c64b57e226cb28cfa.scope: Deactivated successfully.
Jan 31 04:59:17 np0005603787 podman[94621]: 2026-01-31 09:59:17.387883845 +0000 UTC m=+0.037863964 container create 6b432b38748c749343f9b75ea220bf86822537f05ace6f406b4907adfa6b5b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-rgw-rgw-compute-0-nqlmbk, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Jan 31 04:59:17 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce57439eb7aef970b75c4d88705ff162d393e5e0d49c9d77e1b4ee8207b87407/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:17 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce57439eb7aef970b75c4d88705ff162d393e5e0d49c9d77e1b4ee8207b87407/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:17 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce57439eb7aef970b75c4d88705ff162d393e5e0d49c9d77e1b4ee8207b87407/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:17 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce57439eb7aef970b75c4d88705ff162d393e5e0d49c9d77e1b4ee8207b87407/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.nqlmbk supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: Deploying daemon rgw.rgw.compute-0.nqlmbk on compute-0
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/3713494146' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 31 04:59:17 np0005603787 podman[94621]: 2026-01-31 09:59:17.439951978 +0000 UTC m=+0.089932127 container init 6b432b38748c749343f9b75ea220bf86822537f05ace6f406b4907adfa6b5b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-rgw-rgw-compute-0-nqlmbk, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:59:17 np0005603787 podman[94621]: 2026-01-31 09:59:17.445914567 +0000 UTC m=+0.095894686 container start 6b432b38748c749343f9b75ea220bf86822537f05ace6f406b4907adfa6b5b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-rgw-rgw-compute-0-nqlmbk, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 04:59:17 np0005603787 bash[94621]: 6b432b38748c749343f9b75ea220bf86822537f05ace6f406b4907adfa6b5b59
Jan 31 04:59:17 np0005603787 podman[94621]: 2026-01-31 09:59:17.369731309 +0000 UTC m=+0.019711458 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:17 np0005603787 systemd[1]: Started Ceph rgw.rgw.compute-0.nqlmbk for 962d77ae-dc67-5de8-89d8-3d1670c67b61.
Jan 31 04:59:17 np0005603787 radosgw[94641]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 31 04:59:17 np0005603787 radosgw[94641]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Jan 31 04:59:17 np0005603787 radosgw[94641]: framework: beast
Jan 31 04:59:17 np0005603787 radosgw[94641]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 31 04:59:17 np0005603787 radosgw[94641]: init_numa not setting numa affinity
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:17 np0005603787 ceph-mgr[75453]: [progress INFO root] complete: finished ev f63cb7a9-ffd9-4536-ae5c-5fff1c14b8ce (Updating rgw.rgw deployment (+1 -> 1))
Jan 31 04:59:17 np0005603787 ceph-mgr[75453]: [progress INFO root] Completed event f63cb7a9-ffd9-4536-ae5c-5fff1c14b8ce (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Jan 31 04:59:17 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Jan 31 04:59:17 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:17 np0005603787 ceph-mgr[75453]: [progress INFO root] update: starting ev 426d9dd9-c653-4bc8-9e61-8737ddcd788b (Updating mds.cephfs deployment (+1 -> 1))
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nykocs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nykocs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nykocs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:59:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:59:17 np0005603787 ceph-mgr[75453]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.nykocs on compute-0
Jan 31 04:59:17 np0005603787 ceph-mgr[75453]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.nykocs on compute-0
Jan 31 04:59:18 np0005603787 podman[94758]: 2026-01-31 09:59:18.010205187 +0000 UTC m=+0.041616504 container create ca2df16a8e1fdb59d42842065ffb0ca5ce8cdc1f3bb21c95d5badcd0dd9f588e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 04:59:18 np0005603787 systemd[1]: Started libpod-conmon-ca2df16a8e1fdb59d42842065ffb0ca5ce8cdc1f3bb21c95d5badcd0dd9f588e.scope.
Jan 31 04:59:18 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:18 np0005603787 podman[94758]: 2026-01-31 09:59:17.990997963 +0000 UTC m=+0.022409300 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:18 np0005603787 podman[94758]: 2026-01-31 09:59:18.093786063 +0000 UTC m=+0.125197400 container init ca2df16a8e1fdb59d42842065ffb0ca5ce8cdc1f3bb21c95d5badcd0dd9f588e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 31 04:59:18 np0005603787 podman[94758]: 2026-01-31 09:59:18.099741223 +0000 UTC m=+0.131152540 container start ca2df16a8e1fdb59d42842065ffb0ca5ce8cdc1f3bb21c95d5badcd0dd9f588e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_allen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:59:18 np0005603787 vibrant_allen[94774]: 167 167
Jan 31 04:59:18 np0005603787 systemd[1]: libpod-ca2df16a8e1fdb59d42842065ffb0ca5ce8cdc1f3bb21c95d5badcd0dd9f588e.scope: Deactivated successfully.
Jan 31 04:59:18 np0005603787 podman[94758]: 2026-01-31 09:59:18.104125101 +0000 UTC m=+0.135536438 container attach ca2df16a8e1fdb59d42842065ffb0ca5ce8cdc1f3bb21c95d5badcd0dd9f588e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_allen, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 04:59:18 np0005603787 podman[94758]: 2026-01-31 09:59:18.106131974 +0000 UTC m=+0.137543291 container died ca2df16a8e1fdb59d42842065ffb0ca5ce8cdc1f3bb21c95d5badcd0dd9f588e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_allen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:18 np0005603787 systemd[1]: var-lib-containers-storage-overlay-8ea1bdac1273256e4699a875a3b4eb9a5280e0d197f6df8f9b3795e24068f753-merged.mount: Deactivated successfully.
Jan 31 04:59:18 np0005603787 podman[94758]: 2026-01-31 09:59:18.150764618 +0000 UTC m=+0.182175935 container remove ca2df16a8e1fdb59d42842065ffb0ca5ce8cdc1f3bb21c95d5badcd0dd9f588e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:18 np0005603787 systemd[1]: libpod-conmon-ca2df16a8e1fdb59d42842065ffb0ca5ce8cdc1f3bb21c95d5badcd0dd9f588e.scope: Deactivated successfully.
Jan 31 04:59:18 np0005603787 systemd[1]: Reloading.
Jan 31 04:59:18 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:59:18 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:59:18 np0005603787 systemd[1]: Reloading.
Jan 31 04:59:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 31 04:59:18 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:18 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:18 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:18 np0005603787 ceph-mon[75160]: Saving service rgw.rgw spec with placement compute-0
Jan 31 04:59:18 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:18 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:18 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nykocs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 31 04:59:18 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nykocs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 04:59:18 np0005603787 ceph-mon[75160]: Deploying daemon mds.cephfs.compute-0.nykocs on compute-0
Jan 31 04:59:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Jan 31 04:59:18 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Jan 31 04:59:18 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 04:59:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Jan 31 04:59:18 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2233105181' entity='client.rgw.rgw.compute-0.nqlmbk' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 31 04:59:18 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 04:59:18 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 31 pg[8.0( empty local-lis/les=0/0 n=0 ec=31/31 lis/c=0/0 les/c/f=0/0/0 sis=31) [1] r=0 lpr=31 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:18 np0005603787 systemd[1]: Starting Ceph mds.cephfs.compute-0.nykocs for 962d77ae-dc67-5de8-89d8-3d1670c67b61...
Jan 31 04:59:18 np0005603787 ceph-mgr[75453]: [progress INFO root] Writing back 4 completed events
Jan 31 04:59:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 04:59:18 np0005603787 ansible-async_wrapper.py[95016]: Invoked with j24917746861 30 /home/zuul/.ansible/tmp/ansible-tmp-1769853558.1496346-36825-235669534513075/AnsiballZ_command.py _
Jan 31 04:59:18 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:18 np0005603787 ceph-mgr[75453]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Jan 31 04:59:18 np0005603787 ansible-async_wrapper.py[95079]: Starting module and watcher
Jan 31 04:59:18 np0005603787 ansible-async_wrapper.py[95079]: Start watching 95080 (30)
Jan 31 04:59:18 np0005603787 ansible-async_wrapper.py[95080]: Start module (95080)
Jan 31 04:59:18 np0005603787 ansible-async_wrapper.py[95016]: Return async_wrapper task started.
Jan 31 04:59:18 np0005603787 podman[95064]: 2026-01-31 09:59:18.969017653 +0000 UTC m=+0.086141046 container create a02c13f55b6049d4797c4b601bf7b7c53663e5c2e39869cfa38f7f2e0990d888 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mds-cephfs-compute-0-nykocs, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 04:59:19 np0005603787 podman[95064]: 2026-01-31 09:59:18.903324745 +0000 UTC m=+0.020448168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c78ebf9a24dd8b679ce4d79981896fd3bb878d1bad795ac1d1e112b9843932/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c78ebf9a24dd8b679ce4d79981896fd3bb878d1bad795ac1d1e112b9843932/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c78ebf9a24dd8b679ce4d79981896fd3bb878d1bad795ac1d1e112b9843932/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c78ebf9a24dd8b679ce4d79981896fd3bb878d1bad795ac1d1e112b9843932/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.nykocs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v70: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:59:19 np0005603787 python3[95081]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:59:19 np0005603787 podman[95064]: 2026-01-31 09:59:19.196541971 +0000 UTC m=+0.313665384 container init a02c13f55b6049d4797c4b601bf7b7c53663e5c2e39869cfa38f7f2e0990d888 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mds-cephfs-compute-0-nykocs, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:19 np0005603787 podman[95064]: 2026-01-31 09:59:19.202834049 +0000 UTC m=+0.319957442 container start a02c13f55b6049d4797c4b601bf7b7c53663e5c2e39869cfa38f7f2e0990d888 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mds-cephfs-compute-0-nykocs, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:59:19 np0005603787 ceph-mds[95101]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 04:59:19 np0005603787 ceph-mds[95101]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Jan 31 04:59:19 np0005603787 ceph-mds[95101]: main not setting numa affinity
Jan 31 04:59:19 np0005603787 ceph-mds[95101]: pidfile_write: ignore empty --pid-file
Jan 31 04:59:19 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mds-cephfs-compute-0-nykocs[95085]: starting mds.cephfs.compute-0.nykocs at 
Jan 31 04:59:19 np0005603787 ceph-mds[95101]: mds.cephfs.compute-0.nykocs Updating MDS map to version 2 from mon.0
Jan 31 04:59:19 np0005603787 bash[95064]: a02c13f55b6049d4797c4b601bf7b7c53663e5c2e39869cfa38f7f2e0990d888
Jan 31 04:59:19 np0005603787 systemd[1]: Started Ceph mds.cephfs.compute-0.nykocs for 962d77ae-dc67-5de8-89d8-3d1670c67b61.
Jan 31 04:59:19 np0005603787 podman[95088]: 2026-01-31 09:59:19.529943923 +0000 UTC m=+0.457008081 container create 9bf349dfc7e530f8fd6647b598cd5fd0106bfec6343da307c4aae384080990ac (image=quay.io/ceph/ceph:v20, name=goofy_wiles, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:19 np0005603787 podman[95088]: 2026-01-31 09:59:19.442608866 +0000 UTC m=+0.369673034 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2233105181' entity='client.rgw.rgw.compute-0.nqlmbk' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:19 np0005603787 systemd[1]: Started libpod-conmon-9bf349dfc7e530f8fd6647b598cd5fd0106bfec6343da307c4aae384080990ac.scope.
Jan 31 04:59:19 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846a1ea4f1a83063e9342c518c9592947465a8184218a0af011e429baef38cda/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:19 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846a1ea4f1a83063e9342c518c9592947465a8184218a0af011e429baef38cda/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2233105181' entity='client.rgw.rgw.compute-0.nqlmbk' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 31 04:59:19 np0005603787 podman[95088]: 2026-01-31 09:59:19.699594242 +0000 UTC m=+0.626658420 container init 9bf349dfc7e530f8fd6647b598cd5fd0106bfec6343da307c4aae384080990ac (image=quay.io/ceph/ceph:v20, name=goofy_wiles, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:19 np0005603787 podman[95088]: 2026-01-31 09:59:19.708392077 +0000 UTC m=+0.635456225 container start 9bf349dfc7e530f8fd6647b598cd5fd0106bfec6343da307c4aae384080990ac (image=quay.io/ceph/ceph:v20, name=goofy_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 31 04:59:19 np0005603787 podman[95088]: 2026-01-31 09:59:19.743472836 +0000 UTC m=+0.670536984 container attach 9bf349dfc7e530f8fd6647b598cd5fd0106bfec6343da307c4aae384080990ac (image=quay.io/ceph/ceph:v20, name=goofy_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:59:19 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 32 pg[8.0( empty local-lis/les=31/32 n=0 ec=31/31 lis/c=0/0 les/c/f=0/0/0 sis=31) [1] r=0 lpr=31 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:19 np0005603787 ceph-mgr[75453]: [progress INFO root] complete: finished ev 426d9dd9-c653-4bc8-9e61-8737ddcd788b (Updating mds.cephfs deployment (+1 -> 1))
Jan 31 04:59:19 np0005603787 ceph-mgr[75453]: [progress INFO root] Completed event 426d9dd9-c653-4bc8-9e61-8737ddcd788b (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).mds e3 new map
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).mds e3 print_map
e3
btime 2026-01-31T09:59:19:921298+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	2
flags	12 joinable allow_snaps allow_multimds_snaps
created	2026-01-31T09:59:09.909032+0000
modified	2026-01-31T09:59:09.909032+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in	
up	{}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	0
qdb_cluster	leader: 0 members: 

Standby daemons:

[mds.cephfs.compute-0.nykocs{-1:14251} state up:standby seq 1 addr [v2:192.168.122.100:6814/2726837788,v1:192.168.122.100:6815/2726837788] compat {c=[1],r=[1],i=[1fff]}]
Jan 31 04:59:19 np0005603787 ceph-mds[95101]: mds.cephfs.compute-0.nykocs Updating MDS map to version 3 from mon.0
Jan 31 04:59:19 np0005603787 ceph-mds[95101]: mds.cephfs.compute-0.nykocs Monitors have assigned me to become a standby
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2726837788,v1:192.168.122.100:6815/2726837788] up:boot
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/2726837788,v1:192.168.122.100:6815/2726837788] as mds.0
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.nykocs assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 04:59:19 np0005603787 ceph-mon[75160]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.nykocs"} v 0)
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.nykocs"} : dispatch
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).mds e3 all = 0
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).mds e4 new map
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2026-01-31T09:59:19:985617+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T09:59:09.909032+0000#012modified#0112026-01-31T09:59:19.985612+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14251}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-0.nykocs{0:14251} state up:creating seq 1 addr [v2:192.168.122.100:6814/2726837788,v1:192.168.122.100:6815/2726837788] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:20 np0005603787 ceph-mds[95101]: mds.cephfs.compute-0.nykocs Updating MDS map to version 4 from mon.0
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.nykocs=up:creating}
Jan 31 04:59:20 np0005603787 ceph-mds[95101]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 31 04:59:20 np0005603787 ceph-mds[95101]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Jan 31 04:59:20 np0005603787 ceph-mds[95101]: mds.0.cache creating system inode with ino:0x1
Jan 31 04:59:20 np0005603787 ceph-mds[95101]: mds.0.cache creating system inode with ino:0x100
Jan 31 04:59:20 np0005603787 ceph-mds[95101]: mds.0.cache creating system inode with ino:0x600
Jan 31 04:59:20 np0005603787 ceph-mds[95101]: mds.0.cache creating system inode with ino:0x601
Jan 31 04:59:20 np0005603787 ceph-mds[95101]: mds.0.cache creating system inode with ino:0x602
Jan 31 04:59:20 np0005603787 ceph-mds[95101]: mds.0.cache creating system inode with ino:0x603
Jan 31 04:59:20 np0005603787 ceph-mds[95101]: mds.0.cache creating system inode with ino:0x604
Jan 31 04:59:20 np0005603787 ceph-mds[95101]: mds.0.cache creating system inode with ino:0x605
Jan 31 04:59:20 np0005603787 ceph-mds[95101]: mds.0.cache creating system inode with ino:0x606
Jan 31 04:59:20 np0005603787 ceph-mds[95101]: mds.0.cache creating system inode with ino:0x607
Jan 31 04:59:20 np0005603787 ceph-mds[95101]: mds.0.cache creating system inode with ino:0x608
Jan 31 04:59:20 np0005603787 ceph-mds[95101]: mds.0.cache creating system inode with ino:0x609
Jan 31 04:59:20 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14253 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 04:59:20 np0005603787 goofy_wiles[95123]: 
Jan 31 04:59:20 np0005603787 goofy_wiles[95123]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 04:59:20 np0005603787 systemd[1]: libpod-9bf349dfc7e530f8fd6647b598cd5fd0106bfec6343da307c4aae384080990ac.scope: Deactivated successfully.
Jan 31 04:59:20 np0005603787 podman[95088]: 2026-01-31 09:59:20.165377956 +0000 UTC m=+1.092442134 container died 9bf349dfc7e530f8fd6647b598cd5fd0106bfec6343da307c4aae384080990ac (image=quay.io/ceph/ceph:v20, name=goofy_wiles, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:59:20 np0005603787 python3[95757]: ansible-ansible.legacy.async_status Invoked with jid=j24917746861.95016 mode=status _async_dir=/root/.ansible_async
Jan 31 04:59:20 np0005603787 ceph-mds[95101]: mds.0.4 creating_done
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.nykocs is now active in filesystem cephfs as rank 0
Jan 31 04:59:20 np0005603787 systemd[1]: var-lib-containers-storage-overlay-846a1ea4f1a83063e9342c518c9592947465a8184218a0af011e429baef38cda-merged.mount: Deactivated successfully.
Jan 31 04:59:20 np0005603787 podman[95088]: 2026-01-31 09:59:20.37445037 +0000 UTC m=+1.301514518 container remove 9bf349dfc7e530f8fd6647b598cd5fd0106bfec6343da307c4aae384080990ac (image=quay.io/ceph/ceph:v20, name=goofy_wiles, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 04:59:20 np0005603787 systemd[1]: libpod-conmon-9bf349dfc7e530f8fd6647b598cd5fd0106bfec6343da307c4aae384080990ac.scope: Deactivated successfully.
Jan 31 04:59:20 np0005603787 ansible-async_wrapper.py[95080]: Module complete (95080)
Jan 31 04:59:20 np0005603787 podman[95899]: 2026-01-31 09:59:20.662765915 +0000 UTC m=+0.055347163 container exec 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2233105181' entity='client.rgw.rgw.compute-0.nqlmbk' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: daemon mds.cephfs.compute-0.nykocs assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: Cluster is now healthy
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: daemon mds.cephfs.compute-0.nykocs is now active in filesystem cephfs as rank 0
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 31 04:59:20 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2036557078' entity='client.rgw.rgw.compute-0.nqlmbk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 31 04:59:20 np0005603787 podman[95899]: 2026-01-31 09:59:20.800483479 +0000 UTC m=+0.193064727 container exec_died 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 04:59:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v73: 9 pgs: 1 unknown, 8 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 1.2 KiB/s wr, 2 op/s
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).mds e5 new map
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).mds e5 print_map#012e5#012btime 2026-01-31T09:59:21:127963+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T09:59:09.909032+0000#012modified#0112026-01-31T09:59:21.127960+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14251}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 14251 members: 14251#012[mds.cephfs.compute-0.nykocs{0:14251} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2726837788,v1:192.168.122.100:6815/2726837788] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Jan 31 04:59:21 np0005603787 ceph-mds[95101]: mds.cephfs.compute-0.nykocs Updating MDS map to version 5 from mon.0
Jan 31 04:59:21 np0005603787 ceph-mds[95101]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 31 04:59:21 np0005603787 ceph-mds[95101]: mds.0.4 handle_mds_map state change up:creating --> up:active
Jan 31 04:59:21 np0005603787 ceph-mds[95101]: mds.0.4 recovery_done -- successful recovery!
Jan 31 04:59:21 np0005603787 ceph-mds[95101]: mds.0.4 active_start
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2726837788,v1:192.168.122.100:6815/2726837788] up:active
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.nykocs=up:active}
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:59:21 np0005603787 python3[96137]: ansible-ansible.legacy.async_status Invoked with jid=j24917746861.95016 mode=status _async_dir=/root/.ansible_async
Jan 31 04:59:21 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 33 pg[9.0( empty local-lis/les=0/0 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2036557078' entity='client.rgw.rgw.compute-0.nqlmbk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 31 04:59:21 np0005603787 python3[96236]: ansible-ansible.legacy.async_status Invoked with jid=j24917746861.95016 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 04:59:21 np0005603787 podman[96250]: 2026-01-31 09:59:21.788941429 +0000 UTC m=+0.046081555 container create c125da06cd1aba24e5874f858c049332cc32d2fc7e029f6999ec20de4f3d9a9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_kowalevski, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:59:21 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 34 pg[9.0( empty local-lis/les=33/34 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2036557078' entity='client.rgw.rgw.compute-0.nqlmbk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:21 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 04:59:21 np0005603787 systemd[1]: Started libpod-conmon-c125da06cd1aba24e5874f858c049332cc32d2fc7e029f6999ec20de4f3d9a9a.scope.
Jan 31 04:59:21 np0005603787 podman[96250]: 2026-01-31 09:59:21.761239648 +0000 UTC m=+0.018379794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:21 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:21 np0005603787 podman[96250]: 2026-01-31 09:59:21.886658534 +0000 UTC m=+0.143798690 container init c125da06cd1aba24e5874f858c049332cc32d2fc7e029f6999ec20de4f3d9a9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_kowalevski, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:21 np0005603787 podman[96250]: 2026-01-31 09:59:21.89175114 +0000 UTC m=+0.148891266 container start c125da06cd1aba24e5874f858c049332cc32d2fc7e029f6999ec20de4f3d9a9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_kowalevski, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 04:59:21 np0005603787 infallible_kowalevski[96268]: 167 167
Jan 31 04:59:21 np0005603787 systemd[1]: libpod-c125da06cd1aba24e5874f858c049332cc32d2fc7e029f6999ec20de4f3d9a9a.scope: Deactivated successfully.
Jan 31 04:59:21 np0005603787 podman[96250]: 2026-01-31 09:59:21.898046048 +0000 UTC m=+0.155186184 container attach c125da06cd1aba24e5874f858c049332cc32d2fc7e029f6999ec20de4f3d9a9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 04:59:21 np0005603787 podman[96250]: 2026-01-31 09:59:21.898516651 +0000 UTC m=+0.155656787 container died c125da06cd1aba24e5874f858c049332cc32d2fc7e029f6999ec20de4f3d9a9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_kowalevski, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 04:59:21 np0005603787 systemd[1]: var-lib-containers-storage-overlay-edcc8241d07418d8e32908c21aae0c973696ee0ffc8abfa67d7d2f95279da7e8-merged.mount: Deactivated successfully.
Jan 31 04:59:21 np0005603787 podman[96250]: 2026-01-31 09:59:21.98403538 +0000 UTC m=+0.241175506 container remove c125da06cd1aba24e5874f858c049332cc32d2fc7e029f6999ec20de4f3d9a9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_kowalevski, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 04:59:22 np0005603787 systemd[1]: libpod-conmon-c125da06cd1aba24e5874f858c049332cc32d2fc7e029f6999ec20de4f3d9a9a.scope: Deactivated successfully.
Jan 31 04:59:22 np0005603787 podman[96295]: 2026-01-31 09:59:22.110317668 +0000 UTC m=+0.046020562 container create 361589e3e029744ce058582fc01f1b639cdeee70866b02cb124aa463ee8df71a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:22 np0005603787 systemd[1]: Started libpod-conmon-361589e3e029744ce058582fc01f1b639cdeee70866b02cb124aa463ee8df71a.scope.
Jan 31 04:59:22 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:22 np0005603787 podman[96295]: 2026-01-31 09:59:22.088278478 +0000 UTC m=+0.023981392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f40b9ddbfc1534f90ad4946ccd9165592687c5dee482872d0663bad6b9abd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f40b9ddbfc1534f90ad4946ccd9165592687c5dee482872d0663bad6b9abd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f40b9ddbfc1534f90ad4946ccd9165592687c5dee482872d0663bad6b9abd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f40b9ddbfc1534f90ad4946ccd9165592687c5dee482872d0663bad6b9abd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f40b9ddbfc1534f90ad4946ccd9165592687c5dee482872d0663bad6b9abd1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:22 np0005603787 podman[96295]: 2026-01-31 09:59:22.200306016 +0000 UTC m=+0.136008920 container init 361589e3e029744ce058582fc01f1b639cdeee70866b02cb124aa463ee8df71a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_volhard, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:59:22 np0005603787 podman[96295]: 2026-01-31 09:59:22.207867438 +0000 UTC m=+0.143570322 container start 361589e3e029744ce058582fc01f1b639cdeee70866b02cb124aa463ee8df71a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:22 np0005603787 podman[96295]: 2026-01-31 09:59:22.214184007 +0000 UTC m=+0.149886931 container attach 361589e3e029744ce058582fc01f1b639cdeee70866b02cb124aa463ee8df71a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_volhard, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:22 np0005603787 python3[96339]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:59:22 np0005603787 podman[96342]: 2026-01-31 09:59:22.384035402 +0000 UTC m=+0.042679102 container create c0e6846ef993012e876981e4c3d4f560d140c213b157b1c574aeb9aef10fecc9 (image=quay.io/ceph/ceph:v20, name=charming_einstein, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:22 np0005603787 systemd[1]: Started libpod-conmon-c0e6846ef993012e876981e4c3d4f560d140c213b157b1c574aeb9aef10fecc9.scope.
Jan 31 04:59:22 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/054e84ad7a1584036765253110506be3236602241cf609648144e0419d99a9e0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/054e84ad7a1584036765253110506be3236602241cf609648144e0419d99a9e0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:22 np0005603787 podman[96342]: 2026-01-31 09:59:22.364842179 +0000 UTC m=+0.023485929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:59:22 np0005603787 podman[96342]: 2026-01-31 09:59:22.466060167 +0000 UTC m=+0.124703897 container init c0e6846ef993012e876981e4c3d4f560d140c213b157b1c574aeb9aef10fecc9 (image=quay.io/ceph/ceph:v20, name=charming_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 04:59:22 np0005603787 podman[96342]: 2026-01-31 09:59:22.472915471 +0000 UTC m=+0.131559171 container start c0e6846ef993012e876981e4c3d4f560d140c213b157b1c574aeb9aef10fecc9 (image=quay.io/ceph/ceph:v20, name=charming_einstein, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:22 np0005603787 podman[96342]: 2026-01-31 09:59:22.498481835 +0000 UTC m=+0.157125595 container attach c0e6846ef993012e876981e4c3d4f560d140c213b157b1c574aeb9aef10fecc9 (image=quay.io/ceph/ceph:v20, name=charming_einstein, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:22 np0005603787 modest_volhard[96318]: --> passed data devices: 0 physical, 3 LVM
Jan 31 04:59:22 np0005603787 modest_volhard[96318]: --> All data devices are unavailable
Jan 31 04:59:22 np0005603787 systemd[1]: libpod-361589e3e029744ce058582fc01f1b639cdeee70866b02cb124aa463ee8df71a.scope: Deactivated successfully.
Jan 31 04:59:22 np0005603787 podman[96295]: 2026-01-31 09:59:22.61263453 +0000 UTC m=+0.548337424 container died 361589e3e029744ce058582fc01f1b639cdeee70866b02cb124aa463ee8df71a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_volhard, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:22 np0005603787 systemd[1]: var-lib-containers-storage-overlay-42f40b9ddbfc1534f90ad4946ccd9165592687c5dee482872d0663bad6b9abd1-merged.mount: Deactivated successfully.
Jan 31 04:59:22 np0005603787 podman[96295]: 2026-01-31 09:59:22.746138732 +0000 UTC m=+0.681841626 container remove 361589e3e029744ce058582fc01f1b639cdeee70866b02cb124aa463ee8df71a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_volhard, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 04:59:22 np0005603787 systemd[1]: libpod-conmon-361589e3e029744ce058582fc01f1b639cdeee70866b02cb124aa463ee8df71a.scope: Deactivated successfully.
Jan 31 04:59:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 31 04:59:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 31 04:59:22 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 31 04:59:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 31 04:59:22 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 35 pg[10.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:22 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2036557078' entity='client.rgw.rgw.compute-0.nqlmbk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 31 04:59:22 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2036557078' entity='client.rgw.rgw.compute-0.nqlmbk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 04:59:22 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2036557078' entity='client.rgw.rgw.compute-0.nqlmbk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 31 04:59:22 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 04:59:22 np0005603787 charming_einstein[96359]: 
Jan 31 04:59:22 np0005603787 charming_einstein[96359]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 04:59:22 np0005603787 systemd[1]: libpod-c0e6846ef993012e876981e4c3d4f560d140c213b157b1c574aeb9aef10fecc9.scope: Deactivated successfully.
Jan 31 04:59:22 np0005603787 podman[96342]: 2026-01-31 09:59:22.918137384 +0000 UTC m=+0.576781104 container died c0e6846ef993012e876981e4c3d4f560d140c213b157b1c574aeb9aef10fecc9 (image=quay.io/ceph/ceph:v20, name=charming_einstein, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:22 np0005603787 systemd[1]: var-lib-containers-storage-overlay-054e84ad7a1584036765253110506be3236602241cf609648144e0419d99a9e0-merged.mount: Deactivated successfully.
Jan 31 04:59:22 np0005603787 podman[96342]: 2026-01-31 09:59:22.959174252 +0000 UTC m=+0.617817952 container remove c0e6846ef993012e876981e4c3d4f560d140c213b157b1c574aeb9aef10fecc9 (image=quay.io/ceph/ceph:v20, name=charming_einstein, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:22 np0005603787 systemd[1]: libpod-conmon-c0e6846ef993012e876981e4c3d4f560d140c213b157b1c574aeb9aef10fecc9.scope: Deactivated successfully.
Jan 31 04:59:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v76: 10 pgs: 2 unknown, 8 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Jan 31 04:59:23 np0005603787 podman[96480]: 2026-01-31 09:59:23.132227363 +0000 UTC m=+0.036496198 container create 69a3ba2f1008e9fc3cc42f5087558cceb169a7548a29cc0dca7b7840376432c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:59:23 np0005603787 systemd[1]: Started libpod-conmon-69a3ba2f1008e9fc3cc42f5087558cceb169a7548a29cc0dca7b7840376432c9.scope.
Jan 31 04:59:23 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:23 np0005603787 podman[96480]: 2026-01-31 09:59:23.190402699 +0000 UTC m=+0.094671554 container init 69a3ba2f1008e9fc3cc42f5087558cceb169a7548a29cc0dca7b7840376432c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:23 np0005603787 podman[96480]: 2026-01-31 09:59:23.196855772 +0000 UTC m=+0.101124607 container start 69a3ba2f1008e9fc3cc42f5087558cceb169a7548a29cc0dca7b7840376432c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:23 np0005603787 hardcore_morse[96496]: 167 167
Jan 31 04:59:23 np0005603787 systemd[1]: libpod-69a3ba2f1008e9fc3cc42f5087558cceb169a7548a29cc0dca7b7840376432c9.scope: Deactivated successfully.
Jan 31 04:59:23 np0005603787 conmon[96496]: conmon 69a3ba2f1008e9fc3cc4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-69a3ba2f1008e9fc3cc42f5087558cceb169a7548a29cc0dca7b7840376432c9.scope/container/memory.events
Jan 31 04:59:23 np0005603787 podman[96480]: 2026-01-31 09:59:23.201549567 +0000 UTC m=+0.105818402 container attach 69a3ba2f1008e9fc3cc42f5087558cceb169a7548a29cc0dca7b7840376432c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 31 04:59:23 np0005603787 podman[96480]: 2026-01-31 09:59:23.202317368 +0000 UTC m=+0.106586203 container died 69a3ba2f1008e9fc3cc42f5087558cceb169a7548a29cc0dca7b7840376432c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 04:59:23 np0005603787 podman[96480]: 2026-01-31 09:59:23.115457894 +0000 UTC m=+0.019726769 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:23 np0005603787 systemd[1]: var-lib-containers-storage-overlay-fa3279b69f7dbcd5d1d19abc142a743a0116664964289feaabc2efc4f62e32dc-merged.mount: Deactivated successfully.
Jan 31 04:59:23 np0005603787 podman[96480]: 2026-01-31 09:59:23.239756529 +0000 UTC m=+0.144025364 container remove 69a3ba2f1008e9fc3cc42f5087558cceb169a7548a29cc0dca7b7840376432c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:23 np0005603787 systemd[1]: libpod-conmon-69a3ba2f1008e9fc3cc42f5087558cceb169a7548a29cc0dca7b7840376432c9.scope: Deactivated successfully.
Jan 31 04:59:23 np0005603787 podman[96520]: 2026-01-31 09:59:23.348315745 +0000 UTC m=+0.033105658 container create 22f3814df2b937400df9d5418f1a11429d19117679fcad8d64291342290e6351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_rosalind, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:23 np0005603787 systemd[1]: Started libpod-conmon-22f3814df2b937400df9d5418f1a11429d19117679fcad8d64291342290e6351.scope.
Jan 31 04:59:23 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:23 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0440ce880bd2a5079432a4cf1a21061e5303ff793782cbd22e53a9ce53f635/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:23 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0440ce880bd2a5079432a4cf1a21061e5303ff793782cbd22e53a9ce53f635/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:23 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0440ce880bd2a5079432a4cf1a21061e5303ff793782cbd22e53a9ce53f635/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:23 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0440ce880bd2a5079432a4cf1a21061e5303ff793782cbd22e53a9ce53f635/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:23 np0005603787 podman[96520]: 2026-01-31 09:59:23.334925266 +0000 UTC m=+0.019715219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:23 np0005603787 podman[96520]: 2026-01-31 09:59:23.553618629 +0000 UTC m=+0.238408582 container init 22f3814df2b937400df9d5418f1a11429d19117679fcad8d64291342290e6351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_rosalind, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 04:59:23 np0005603787 podman[96520]: 2026-01-31 09:59:23.559160887 +0000 UTC m=+0.243950810 container start 22f3814df2b937400df9d5418f1a11429d19117679fcad8d64291342290e6351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_rosalind, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]: {
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:    "0": [
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:        {
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "devices": [
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "/dev/loop3"
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            ],
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "lv_name": "ceph_lv0",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "lv_size": "21470642176",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "name": "ceph_lv0",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "tags": {
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.cluster_name": "ceph",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.crush_device_class": "",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.encrypted": "0",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.objectstore": "bluestore",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.osd_id": "0",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.type": "block",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.vdo": "0",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.with_tpm": "0"
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            },
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "type": "block",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "vg_name": "ceph_vg0"
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:        }
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:    ],
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:    "1": [
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:        {
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "devices": [
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "/dev/loop4"
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            ],
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "lv_name": "ceph_lv1",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "lv_size": "21470642176",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "name": "ceph_lv1",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "tags": {
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.cluster_name": "ceph",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.crush_device_class": "",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.encrypted": "0",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.objectstore": "bluestore",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.osd_id": "1",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.type": "block",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.vdo": "0",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.with_tpm": "0"
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            },
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "type": "block",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "vg_name": "ceph_vg1"
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:        }
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:    ],
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:    "2": [
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:        {
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "devices": [
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "/dev/loop5"
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            ],
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "lv_name": "ceph_lv2",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "lv_size": "21470642176",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "name": "ceph_lv2",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "tags": {
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.cluster_name": "ceph",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.crush_device_class": "",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.encrypted": "0",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.objectstore": "bluestore",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.osd_id": "2",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.type": "block",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.vdo": "0",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:                "ceph.with_tpm": "0"
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            },
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "type": "block",
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:            "vg_name": "ceph_vg2"
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:        }
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]:    ]
Jan 31 04:59:23 np0005603787 goofy_rosalind[96537]: }
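[annotation] The JSON block printed by goofy_rosalind above matches the output format of "ceph-volume lvm list --format json": one key per OSD id, each entry describing the backing LV, its device, and the ceph.* LV tags (cluster_fsid, osd_fsid, objectstore, and so on). As a hedged sketch, not a step recorded in this log, the same inventory could be re-queried from the host through cephadm, which runs it in a one-shot container much like the one logged here:

    # hedged sketch: re-run the per-OSD LV inventory from the host
    cephadm ceph-volume lvm list --format json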
Jan 31 04:59:23 np0005603787 systemd[1]: libpod-22f3814df2b937400df9d5418f1a11429d19117679fcad8d64291342290e6351.scope: Deactivated successfully.
Jan 31 04:59:23 np0005603787 podman[96520]: 2026-01-31 09:59:23.904050655 +0000 UTC m=+0.588840628 container attach 22f3814df2b937400df9d5418f1a11429d19117679fcad8d64291342290e6351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:23 np0005603787 podman[96520]: 2026-01-31 09:59:23.90536614 +0000 UTC m=+0.590156103 container died 22f3814df2b937400df9d5418f1a11429d19117679fcad8d64291342290e6351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_rosalind, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:23 np0005603787 ceph-mgr[75453]: [progress INFO root] Writing back 5 completed events
Jan 31 04:59:23 np0005603787 ansible-async_wrapper.py[95079]: Done in kid B.
Jan 31 04:59:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 04:59:23 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2036557078' entity='client.rgw.rgw.compute-0.nqlmbk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 04:59:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 31 04:59:23 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 31 04:59:23 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 36 pg[10.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:23 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:23 np0005603787 systemd[1]: var-lib-containers-storage-overlay-7d0440ce880bd2a5079432a4cf1a21061e5303ff793782cbd22e53a9ce53f635-merged.mount: Deactivated successfully.
Jan 31 04:59:23 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2036557078' entity='client.rgw.rgw.compute-0.nqlmbk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 04:59:23 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:24 np0005603787 python3[96567]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:59:24 np0005603787 podman[96520]: 2026-01-31 09:59:24.01224357 +0000 UTC m=+0.697033493 container remove 22f3814df2b937400df9d5418f1a11429d19117679fcad8d64291342290e6351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 04:59:24 np0005603787 systemd[1]: libpod-conmon-22f3814df2b937400df9d5418f1a11429d19117679fcad8d64291342290e6351.scope: Deactivated successfully.
Jan 31 04:59:24 np0005603787 podman[96583]: 2026-01-31 09:59:24.064801707 +0000 UTC m=+0.048936511 container create 566712861ae166560199ddf45ea906376db09c9a7c4800dea6fa1b75a6185149 (image=quay.io/ceph/ceph:v20, name=sad_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:24 np0005603787 systemd[1]: Started libpod-conmon-566712861ae166560199ddf45ea906376db09c9a7c4800dea6fa1b75a6185149.scope.
Jan 31 04:59:24 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:24 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597d0ed9750b44fe4717ae54301fb286d966bf766813da8395af9fca430585f4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:24 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597d0ed9750b44fe4717ae54301fb286d966bf766813da8395af9fca430585f4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:24 np0005603787 podman[96583]: 2026-01-31 09:59:24.13892305 +0000 UTC m=+0.123057894 container init 566712861ae166560199ddf45ea906376db09c9a7c4800dea6fa1b75a6185149 (image=quay.io/ceph/ceph:v20, name=sad_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 04:59:24 np0005603787 podman[96583]: 2026-01-31 09:59:24.044205405 +0000 UTC m=+0.028340229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:59:24 np0005603787 podman[96583]: 2026-01-31 09:59:24.145000982 +0000 UTC m=+0.129135786 container start 566712861ae166560199ddf45ea906376db09c9a7c4800dea6fa1b75a6185149 (image=quay.io/ceph/ceph:v20, name=sad_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:24 np0005603787 podman[96583]: 2026-01-31 09:59:24.148809845 +0000 UTC m=+0.132944649 container attach 566712861ae166560199ddf45ea906376db09c9a7c4800dea6fa1b75a6185149 (image=quay.io/ceph/ceph:v20, name=sad_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:59:24 np0005603787 podman[96684]: 2026-01-31 09:59:24.390748699 +0000 UTC m=+0.032738548 container create 517099db74d561084fc076122b5cc283eab79476e373a8ffbe131e4e7a3674e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 04:59:24 np0005603787 systemd[1]: Started libpod-conmon-517099db74d561084fc076122b5cc283eab79476e373a8ffbe131e4e7a3674e0.scope.
Jan 31 04:59:24 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:24 np0005603787 podman[96684]: 2026-01-31 09:59:24.44391136 +0000 UTC m=+0.085901239 container init 517099db74d561084fc076122b5cc283eab79476e373a8ffbe131e4e7a3674e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 31 04:59:24 np0005603787 podman[96684]: 2026-01-31 09:59:24.448241857 +0000 UTC m=+0.090231706 container start 517099db74d561084fc076122b5cc283eab79476e373a8ffbe131e4e7a3674e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 04:59:24 np0005603787 stupefied_matsumoto[96701]: 167 167
Jan 31 04:59:24 np0005603787 systemd[1]: libpod-517099db74d561084fc076122b5cc283eab79476e373a8ffbe131e4e7a3674e0.scope: Deactivated successfully.
Jan 31 04:59:24 np0005603787 podman[96684]: 2026-01-31 09:59:24.453812666 +0000 UTC m=+0.095802545 container attach 517099db74d561084fc076122b5cc283eab79476e373a8ffbe131e4e7a3674e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:24 np0005603787 podman[96684]: 2026-01-31 09:59:24.454403471 +0000 UTC m=+0.096393310 container died 517099db74d561084fc076122b5cc283eab79476e373a8ffbe131e4e7a3674e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:24 np0005603787 podman[96684]: 2026-01-31 09:59:24.375775838 +0000 UTC m=+0.017765717 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:24 np0005603787 systemd[1]: var-lib-containers-storage-overlay-86e86e51f80d9f9ae4ffd7ec7025550241995f183c06a4dae05590867697a7e3-merged.mount: Deactivated successfully.
Jan 31 04:59:24 np0005603787 podman[96684]: 2026-01-31 09:59:24.491944486 +0000 UTC m=+0.133934335 container remove 517099db74d561084fc076122b5cc283eab79476e373a8ffbe131e4e7a3674e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:59:24 np0005603787 systemd[1]: libpod-conmon-517099db74d561084fc076122b5cc283eab79476e373a8ffbe131e4e7a3674e0.scope: Deactivated successfully.
Jan 31 04:59:24 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 04:59:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.nqlmbk", "name": "rgw_frontends"} v 0)
Jan 31 04:59:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.nqlmbk", "name": "rgw_frontends"} : dispatch
Jan 31 04:59:24 np0005603787 sad_brahmagupta[96623]: 
Jan 31 04:59:24 np0005603787 sad_brahmagupta[96623]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
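[annotation] The single-line JSON above is the service-spec export from the "orch ls --export -f json" invocation logged at 04:59:24. A hedged sketch, reusing the podman wrapper pattern from that invocation: dropping "-f json" makes the export YAML, which can be saved and re-applied with "ceph orch apply -i"; the output filename below is illustrative, not taken from this log.

    # hedged sketch: capture the current service specs as YAML for later re-apply
    podman run --rm --net=host --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v20 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      orch ls --export > service_specs.yaml
    # later: ceph orch apply -i service_specs.yaml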
Jan 31 04:59:24 np0005603787 systemd[1]: libpod-566712861ae166560199ddf45ea906376db09c9a7c4800dea6fa1b75a6185149.scope: Deactivated successfully.
Jan 31 04:59:24 np0005603787 podman[96724]: 2026-01-31 09:59:24.598471297 +0000 UTC m=+0.039339094 container create ba80021f69266ef04e2f2526d3dd8a2817f6b3195610e47a197c877a1009bff6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 04:59:24 np0005603787 podman[96583]: 2026-01-31 09:59:24.604352704 +0000 UTC m=+0.588487508 container died 566712861ae166560199ddf45ea906376db09c9a7c4800dea6fa1b75a6185149 (image=quay.io/ceph/ceph:v20, name=sad_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:24 np0005603787 systemd[1]: Started libpod-conmon-ba80021f69266ef04e2f2526d3dd8a2817f6b3195610e47a197c877a1009bff6.scope.
Jan 31 04:59:24 np0005603787 podman[96583]: 2026-01-31 09:59:24.655825831 +0000 UTC m=+0.639960635 container remove 566712861ae166560199ddf45ea906376db09c9a7c4800dea6fa1b75a6185149 (image=quay.io/ceph/ceph:v20, name=sad_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 04:59:24 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:24 np0005603787 systemd[1]: libpod-conmon-566712861ae166560199ddf45ea906376db09c9a7c4800dea6fa1b75a6185149.scope: Deactivated successfully.
Jan 31 04:59:24 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb4cac6dadfa3fe540d48f8faf1ec881a91b67c9cce727555e6d7d15817ead7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:24 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb4cac6dadfa3fe540d48f8faf1ec881a91b67c9cce727555e6d7d15817ead7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:24 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb4cac6dadfa3fe540d48f8faf1ec881a91b67c9cce727555e6d7d15817ead7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:24 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb4cac6dadfa3fe540d48f8faf1ec881a91b67c9cce727555e6d7d15817ead7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:24 np0005603787 podman[96724]: 2026-01-31 09:59:24.581624576 +0000 UTC m=+0.022492403 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:24 np0005603787 podman[96724]: 2026-01-31 09:59:24.678806546 +0000 UTC m=+0.119674363 container init ba80021f69266ef04e2f2526d3dd8a2817f6b3195610e47a197c877a1009bff6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ptolemy, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 04:59:24 np0005603787 podman[96724]: 2026-01-31 09:59:24.683362488 +0000 UTC m=+0.124230285 container start ba80021f69266ef04e2f2526d3dd8a2817f6b3195610e47a197c877a1009bff6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ptolemy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 04:59:24 np0005603787 podman[96724]: 2026-01-31 09:59:24.686983775 +0000 UTC m=+0.127851572 container attach ba80021f69266ef04e2f2526d3dd8a2817f6b3195610e47a197c877a1009bff6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ptolemy, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 31 04:59:24 np0005603787 systemd[1]: var-lib-containers-storage-overlay-597d0ed9750b44fe4717ae54301fb286d966bf766813da8395af9fca430585f4-merged.mount: Deactivated successfully.
Jan 31 04:59:25 np0005603787 ceph-mds[95101]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 31 04:59:25 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mds-cephfs-compute-0-nykocs[95085]: 2026-01-31T09:59:25.011+0000 7fc0d3a7d640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 31 04:59:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 31 04:59:25 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 31 04:59:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 31 04:59:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2036557078' entity='client.rgw.rgw.compute-0.nqlmbk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 31 04:59:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v79: 11 pgs: 1 creating+peering, 2 unknown, 8 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 11 op/s
Jan 31 04:59:25 np0005603787 lvm[96833]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 04:59:25 np0005603787 lvm[96833]: VG ceph_vg0 finished
Jan 31 04:59:25 np0005603787 lvm[96834]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 04:59:25 np0005603787 lvm[96834]: VG ceph_vg1 finished
Jan 31 04:59:25 np0005603787 lvm[96836]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 04:59:25 np0005603787 lvm[96836]: VG ceph_vg2 finished
Jan 31 04:59:25 np0005603787 lvm[96843]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 04:59:25 np0005603787 lvm[96843]: VG ceph_vg0 finished
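[annotation] The lvm "PV ... online, VG ... is complete" / "VG ... finished" pairs above are LVM event-based autoactivation reacting to the loop devices that back the OSD volume groups; they are informational. A hedged sketch of verifying the same VGs and LVs from the host with standard lvm2 reporting commands (names taken from this log):

    # hedged sketch: confirm the autoactivated VGs and their backing devices
    vgs ceph_vg0 ceph_vg1 ceph_vg2
    lvs -o lv_name,vg_name,devices ceph_vg0 ceph_vg1 ceph_vg2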
Jan 31 04:59:25 np0005603787 determined_ptolemy[96754]: {}
Jan 31 04:59:25 np0005603787 systemd[1]: libpod-ba80021f69266ef04e2f2526d3dd8a2817f6b3195610e47a197c877a1009bff6.scope: Deactivated successfully.
Jan 31 04:59:25 np0005603787 systemd[1]: libpod-ba80021f69266ef04e2f2526d3dd8a2817f6b3195610e47a197c877a1009bff6.scope: Consumed 1.089s CPU time.
Jan 31 04:59:25 np0005603787 podman[96724]: 2026-01-31 09:59:25.434071906 +0000 UTC m=+0.874939703 container died ba80021f69266ef04e2f2526d3dd8a2817f6b3195610e47a197c877a1009bff6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ptolemy, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:25 np0005603787 systemd[1]: var-lib-containers-storage-overlay-0fb4cac6dadfa3fe540d48f8faf1ec881a91b67c9cce727555e6d7d15817ead7-merged.mount: Deactivated successfully.
Jan 31 04:59:25 np0005603787 podman[96724]: 2026-01-31 09:59:25.482482881 +0000 UTC m=+0.923350678 container remove ba80021f69266ef04e2f2526d3dd8a2817f6b3195610e47a197c877a1009bff6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ptolemy, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 04:59:25 np0005603787 python3[96864]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:59:25 np0005603787 systemd[1]: libpod-conmon-ba80021f69266ef04e2f2526d3dd8a2817f6b3195610e47a197c877a1009bff6.scope: Deactivated successfully.
Jan 31 04:59:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:59:25 np0005603787 podman[96878]: 2026-01-31 09:59:25.528768649 +0000 UTC m=+0.024894716 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:59:25 np0005603787 podman[96878]: 2026-01-31 09:59:25.69099071 +0000 UTC m=+0.187116747 container create 57fba770ddf8f0c456ffa81d41dfc6a39824de01d302cb29c354824e396206c4 (image=quay.io/ceph/ceph:v20, name=mystifying_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 04:59:25 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 37 pg[11.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:59:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:25 np0005603787 systemd[1]: Started libpod-conmon-57fba770ddf8f0c456ffa81d41dfc6a39824de01d302cb29c354824e396206c4.scope.
Jan 31 04:59:25 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd8f47f892c123f7c31f855baef88e64d867467c9dde1ac3b904b363e7f6fe3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd8f47f892c123f7c31f855baef88e64d867467c9dde1ac3b904b363e7f6fe3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:59:25 np0005603787 podman[96878]: 2026-01-31 09:59:25.833429281 +0000 UTC m=+0.329555338 container init 57fba770ddf8f0c456ffa81d41dfc6a39824de01d302cb29c354824e396206c4 (image=quay.io/ceph/ceph:v20, name=mystifying_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:59:25 np0005603787 podman[96878]: 2026-01-31 09:59:25.839900415 +0000 UTC m=+0.336026442 container start 57fba770ddf8f0c456ffa81d41dfc6a39824de01d302cb29c354824e396206c4 (image=quay.io/ceph/ceph:v20, name=mystifying_ganguly, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:25 np0005603787 podman[96878]: 2026-01-31 09:59:25.846219023 +0000 UTC m=+0.342345060 container attach 57fba770ddf8f0c456ffa81d41dfc6a39824de01d302cb29c354824e396206c4 (image=quay.io/ceph/ceph:v20, name=mystifying_ganguly, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 04:59:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:26 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 31 04:59:26 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2036557078' entity='client.rgw.rgw.compute-0.nqlmbk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 04:59:26 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 31 04:59:26 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 31 04:59:26 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 31 04:59:26 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2036557078' entity='client.rgw.rgw.compute-0.nqlmbk' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 31 04:59:26 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 38 pg[11.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:26 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2036557078' entity='client.rgw.rgw.compute-0.nqlmbk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 31 04:59:26 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:26 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:26 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:26 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:26 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 04:59:26 np0005603787 mystifying_ganguly[96914]: 
Jan 31 04:59:26 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:59:26 np0005603787 mystifying_ganguly[96914]: [{"container_id": "ca20754edb29", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.19%", "created": "2026-01-31T09:58:02.683334Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-31T09:58:02.740412Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T09:59:21.372846Z", "memory_usage": 7803502, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-01-31T09:58:02.576948Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61@crash.compute-0", "version": "20.2.0"}, {"container_id": "a02c13f55b60", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "6.59%", "created": "2026-01-31T09:59:19.384355Z", "daemon_id": "cephfs.compute-0.nykocs", "daemon_name": "mds.cephfs.compute-0.nykocs", "daemon_type": "mds", "events": ["2026-01-31T09:59:19.831976Z daemon:mds.cephfs.compute-0.nykocs [INFO] \"Deployed mds.cephfs.compute-0.nykocs on host 'compute-0'\""], "hostname": "compute-0", "is_active": true, "last_refresh": "2026-01-31T09:59:21.373209Z", "memory_usage": 15749611, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-01-31T09:59:18.907136Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61@mds.cephfs.compute-0.nykocs", "version": "20.2.0"}, {"container_id": "c0327d95fd7f", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "16.70%", "created": "2026-01-31T09:57:22.770293Z", "daemon_id": "compute-0.mdmqaq", "daemon_name": "mgr.compute-0.mdmqaq", "daemon_type": "mgr", "events": ["2026-01-31T09:58:06.446891Z daemon:mgr.compute-0.mdmqaq [INFO] \"Reconfigured mgr.compute-0.mdmqaq on host 'compute-0'\""], "hostname": "compute-0", "is_active": true, "last_refresh": "2026-01-31T09:59:21.372778Z", "memory_usage": 547776102, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-31T09:57:22.659850Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61@mgr.compute-0.mdmqaq", "version": "20.2.0"}, {"container_id": "1cb6a2ad0c52", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.57%", "created": "2026-01-31T09:57:19.091197Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-31T09:58:05.837370Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T09:59:21.372690Z", "memory_request": 2147483648, "memory_usage": 44344279, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-01-31T09:57:21.082743Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61@mon.compute-0", "version": "20.2.0"}, {"container_id": "e5b4158e31f5", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.41%", "created": "2026-01-31T09:58:23.487884Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-31T09:58:23.561824Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T09:59:21.372910Z", "memory_request": 4294967296, "memory_usage": 58237911, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T09:58:23.400987Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61@osd.0", "version": "20.2.0"}, {"container_id": "c50175b83e0d", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.47%", "created": "2026-01-31T09:58:27.670995Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-31T09:58:27.822486Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T09:59:21.372976Z", "memory_request": 4294967296, "memory_usage": 59265515, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T09:58:27.467400Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61@osd.1", "version": "20.2.0"}, {"container_id": "1afffe856079", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"cpu_percentage": "1.62%", "created": "2026-01-31T09:58:36.534229Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-31T09:58:36.687507Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T09:59:21.373042Z", "memory_request": 4294967296, "memory_usage": 56675532, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T09:58:36.309996Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61@osd.2", "version": "20.2.0"}, {"container_id": "6b432b38748c", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688b
Jan 31 04:59:26 np0005603787 systemd[1]: libpod-57fba770ddf8f0c456ffa81d41dfc6a39824de01d302cb29c354824e396206c4.scope: Deactivated successfully.
Jan 31 04:59:26 np0005603787 podman[96878]: 2026-01-31 09:59:26.255452723 +0000 UTC m=+0.751578760 container died 57fba770ddf8f0c456ffa81d41dfc6a39824de01d302cb29c354824e396206c4 (image=quay.io/ceph/ceph:v20, name=mystifying_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:26 np0005603787 systemd[1]: var-lib-containers-storage-overlay-4cd8f47f892c123f7c31f855baef88e64d867467c9dde1ac3b904b363e7f6fe3-merged.mount: Deactivated successfully.
Jan 31 04:59:26 np0005603787 podman[96878]: 2026-01-31 09:59:26.504250501 +0000 UTC m=+1.000376528 container remove 57fba770ddf8f0c456ffa81d41dfc6a39824de01d302cb29c354824e396206c4 (image=quay.io/ceph/ceph:v20, name=mystifying_ganguly, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:26 np0005603787 rsyslogd[1002]: message too long (8841) with configured size 8096, begin of message is: [{"container_id": "ca20754edb29", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 04:59:26 np0005603787 systemd[1]: libpod-conmon-57fba770ddf8f0c456ffa81d41dfc6a39824de01d302cb29c354824e396206c4.scope: Deactivated successfully.
Jan 31 04:59:26 np0005603787 podman[97039]: 2026-01-31 09:59:26.732393606 +0000 UTC m=+0.458717556 container exec 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:59:26 np0005603787 podman[97039]: 2026-01-31 09:59:26.846509409 +0000 UTC m=+0.572833359 container exec_died 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:59:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v81: 11 pgs: 1 creating+peering, 1 unknown, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 239 B/s rd, 478 B/s wr, 1 op/s
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2036557078' entity='client.rgw.rgw.compute-0.nqlmbk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2036557078' entity='client.rgw.rgw.compute-0.nqlmbk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2036557078' entity='client.rgw.rgw.compute-0.nqlmbk' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: from='client.? 192.168.122.100:0/2036557078' entity='client.rgw.rgw.compute-0.nqlmbk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 04:59:27 np0005603787 python3[97190]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:59:27 np0005603787 podman[97231]: 2026-01-31 09:59:27.474335299 +0000 UTC m=+0.039005605 container create af77305a4701ea685860271b6dc50c136ee8e3c14eee95f95ca7df127e0c88ef (image=quay.io/ceph/ceph:v20, name=vibrant_lamarr, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 04:59:27 np0005603787 systemd[1]: Started libpod-conmon-af77305a4701ea685860271b6dc50c136ee8e3c14eee95f95ca7df127e0c88ef.scope.
Jan 31 04:59:27 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fa30f190a069839dc889c136c307e5cd05a618c33e5d1660a1b856997255054/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fa30f190a069839dc889c136c307e5cd05a618c33e5d1660a1b856997255054/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:27 np0005603787 podman[97231]: 2026-01-31 09:59:27.454138588 +0000 UTC m=+0.018808924 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:59:27 np0005603787 podman[97231]: 2026-01-31 09:59:27.568537649 +0000 UTC m=+0.133207975 container init af77305a4701ea685860271b6dc50c136ee8e3c14eee95f95ca7df127e0c88ef (image=quay.io/ceph/ceph:v20, name=vibrant_lamarr, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:27 np0005603787 podman[97231]: 2026-01-31 09:59:27.574312614 +0000 UTC m=+0.138982920 container start af77305a4701ea685860271b6dc50c136ee8e3c14eee95f95ca7df127e0c88ef (image=quay.io/ceph/ceph:v20, name=vibrant_lamarr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 04:59:27 np0005603787 podman[97231]: 2026-01-31 09:59:27.589033068 +0000 UTC m=+0.153703404 container attach af77305a4701ea685860271b6dc50c136ee8e3c14eee95f95ca7df127e0c88ef (image=quay.io/ceph/ceph:v20, name=vibrant_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 04:59:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 04:59:27 np0005603787 radosgw[94641]: v1 topic migration: starting v1 topic migration..
Jan 31 04:59:27 np0005603787 radosgw[94641]: v1 topic migration: finished v1 topic migration
Jan 31 04:59:27 np0005603787 radosgw[94641]: framework: beast
Jan 31 04:59:27 np0005603787 radosgw[94641]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 31 04:59:27 np0005603787 radosgw[94641]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 31 04:59:27 np0005603787 radosgw[94641]: starting handler: beast
Jan 31 04:59:27 np0005603787 radosgw[94641]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 04:59:27 np0005603787 radosgw[94641]: mgrc service_daemon_register rgw.14256 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.nqlmbk,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864292,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=13eadcec-d4ef-453e-84d9-a048bc43b96f,zone_name=default,zonegroup_id=986723ed-da16-494a-be19-b99fbfd87e43,zonegroup_name=default}
Jan 31 04:59:28 np0005603787 podman[97399]: 2026-01-31 09:59:28.030395288 +0000 UTC m=+0.037087133 container create 6f40358a3f2b4a60eb0e1e0e0edecbbcced19ab735b7d9648f0f4c4349e37fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:28 np0005603787 systemd[1]: Started libpod-conmon-6f40358a3f2b4a60eb0e1e0e0edecbbcced19ab735b7d9648f0f4c4349e37fae.scope.
Jan 31 04:59:28 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:28 np0005603787 podman[97399]: 2026-01-31 09:59:28.100479133 +0000 UTC m=+0.107170888 container init 6f40358a3f2b4a60eb0e1e0e0edecbbcced19ab735b7d9648f0f4c4349e37fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hugle, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 04:59:28 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3346136186' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 04:59:28 np0005603787 vibrant_lamarr[97281]: 
Jan 31 04:59:28 np0005603787 vibrant_lamarr[97281]: {"fsid":"962d77ae-dc67-5de8-89d8-3d1670c67b61","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":126,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":39,"num_osds":3,"num_up_osds":3,"osd_up_since":1769853527,"num_in_osds":3,"osd_in_since":1769853497,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":9},{"state_name":"unknown","count":1},{"state_name":"creating+peering","count":1}],"num_pgs":11,"num_pools":11,"num_objects":32,"data_bytes":463572,"bytes_used":84115456,"bytes_avail":64327811072,"bytes_total":64411926528,"unknown_pgs_ratio":0.090909093618392944,"inactive_pgs_ratio":0.090909093618392944,"read_bytes_sec":239,"write_bytes_sec":478,"read_op_per_sec":0,"write_op_per_sec":0},"fsmap":{"epoch":5,"btime":"2026-01-31T09:59:21:127963+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.nykocs","status":"up:active","gid":14251}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-31T09:58:45.043527+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"33627e9d-5fba-4867-86a2-4905f7e2cc96":{"message":"Global Recovery Event (5s)\n      [======================......] (remaining: 1s)","progress":0.80000001192092896,"add_to_ceph_s":true}}}
Jan 31 04:59:28 np0005603787 podman[97399]: 2026-01-31 09:59:28.107751197 +0000 UTC m=+0.114442932 container start 6f40358a3f2b4a60eb0e1e0e0edecbbcced19ab735b7d9648f0f4c4349e37fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hugle, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 04:59:28 np0005603787 podman[97399]: 2026-01-31 09:59:28.011547984 +0000 UTC m=+0.018239749 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:28 np0005603787 suspicious_hugle[97415]: 167 167
Jan 31 04:59:28 np0005603787 systemd[1]: libpod-6f40358a3f2b4a60eb0e1e0e0edecbbcced19ab735b7d9648f0f4c4349e37fae.scope: Deactivated successfully.
Jan 31 04:59:28 np0005603787 podman[97399]: 2026-01-31 09:59:28.112801473 +0000 UTC m=+0.119493238 container attach 6f40358a3f2b4a60eb0e1e0e0edecbbcced19ab735b7d9648f0f4c4349e37fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 04:59:28 np0005603787 podman[97399]: 2026-01-31 09:59:28.113251424 +0000 UTC m=+0.119943159 container died 6f40358a3f2b4a60eb0e1e0e0edecbbcced19ab735b7d9648f0f4c4349e37fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hugle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 04:59:28 np0005603787 systemd[1]: libpod-af77305a4701ea685860271b6dc50c136ee8e3c14eee95f95ca7df127e0c88ef.scope: Deactivated successfully.
Jan 31 04:59:28 np0005603787 podman[97231]: 2026-01-31 09:59:28.121415563 +0000 UTC m=+0.686085879 container died af77305a4701ea685860271b6dc50c136ee8e3c14eee95f95ca7df127e0c88ef (image=quay.io/ceph/ceph:v20, name=vibrant_lamarr, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 04:59:28 np0005603787 systemd[1]: var-lib-containers-storage-overlay-1fa30f190a069839dc889c136c307e5cd05a618c33e5d1660a1b856997255054-merged.mount: Deactivated successfully.
Jan 31 04:59:28 np0005603787 systemd[1]: var-lib-containers-storage-overlay-3793381bf9706db3ca36901629b9044336b4d4943ebec6b154a9760469ad7506-merged.mount: Deactivated successfully.
Jan 31 04:59:28 np0005603787 podman[97399]: 2026-01-31 09:59:28.216169309 +0000 UTC m=+0.222861044 container remove 6f40358a3f2b4a60eb0e1e0e0edecbbcced19ab735b7d9648f0f4c4349e37fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hugle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:59:28 np0005603787 systemd[1]: libpod-conmon-6f40358a3f2b4a60eb0e1e0e0edecbbcced19ab735b7d9648f0f4c4349e37fae.scope: Deactivated successfully.
Jan 31 04:59:28 np0005603787 podman[97231]: 2026-01-31 09:59:28.257575296 +0000 UTC m=+0.822245602 container remove af77305a4701ea685860271b6dc50c136ee8e3c14eee95f95ca7df127e0c88ef (image=quay.io/ceph/ceph:v20, name=vibrant_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 04:59:28 np0005603787 systemd[1]: libpod-conmon-af77305a4701ea685860271b6dc50c136ee8e3c14eee95f95ca7df127e0c88ef.scope: Deactivated successfully.
Jan 31 04:59:28 np0005603787 podman[97452]: 2026-01-31 09:59:28.341355508 +0000 UTC m=+0.043746051 container create 339ad96d883a51985fce7c239368bec47ee9ce7b307418d007022c0b456463dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 04:59:28 np0005603787 systemd[1]: Started libpod-conmon-339ad96d883a51985fce7c239368bec47ee9ce7b307418d007022c0b456463dd.scope.
Jan 31 04:59:28 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:28 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d41e279df58ba35d83055583530dedee88139a4b55cfa304025eaa169e168288/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:28 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d41e279df58ba35d83055583530dedee88139a4b55cfa304025eaa169e168288/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:28 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d41e279df58ba35d83055583530dedee88139a4b55cfa304025eaa169e168288/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:28 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d41e279df58ba35d83055583530dedee88139a4b55cfa304025eaa169e168288/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:28 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d41e279df58ba35d83055583530dedee88139a4b55cfa304025eaa169e168288/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:28 np0005603787 podman[97452]: 2026-01-31 09:59:28.319591926 +0000 UTC m=+0.021982469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:28 np0005603787 podman[97452]: 2026-01-31 09:59:28.417695181 +0000 UTC m=+0.120085724 container init 339ad96d883a51985fce7c239368bec47ee9ce7b307418d007022c0b456463dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_swirles, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:28 np0005603787 podman[97452]: 2026-01-31 09:59:28.423334172 +0000 UTC m=+0.125724715 container start 339ad96d883a51985fce7c239368bec47ee9ce7b307418d007022c0b456463dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_swirles, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:28 np0005603787 podman[97452]: 2026-01-31 09:59:28.429553388 +0000 UTC m=+0.131943961 container attach 339ad96d883a51985fce7c239368bec47ee9ce7b307418d007022c0b456463dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_swirles, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 04:59:28 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:28 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:28 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 04:59:28 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:28 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 04:59:28 np0005603787 serene_swirles[97468]: --> passed data devices: 0 physical, 3 LVM
Jan 31 04:59:28 np0005603787 serene_swirles[97468]: --> All data devices are unavailable
Jan 31 04:59:28 np0005603787 systemd[1]: libpod-339ad96d883a51985fce7c239368bec47ee9ce7b307418d007022c0b456463dd.scope: Deactivated successfully.
Jan 31 04:59:28 np0005603787 conmon[97468]: conmon 339ad96d883a51985fce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-339ad96d883a51985fce7c239368bec47ee9ce7b307418d007022c0b456463dd.scope/container/memory.events
Jan 31 04:59:28 np0005603787 podman[97452]: 2026-01-31 09:59:28.846434003 +0000 UTC m=+0.548824546 container died 339ad96d883a51985fce7c239368bec47ee9ce7b307418d007022c0b456463dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_swirles, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 04:59:28 np0005603787 systemd[1]: var-lib-containers-storage-overlay-d41e279df58ba35d83055583530dedee88139a4b55cfa304025eaa169e168288-merged.mount: Deactivated successfully.
Jan 31 04:59:28 np0005603787 podman[97452]: 2026-01-31 09:59:28.927710118 +0000 UTC m=+0.630100661 container remove 339ad96d883a51985fce7c239368bec47ee9ce7b307418d007022c0b456463dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_swirles, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:28 np0005603787 systemd[1]: libpod-conmon-339ad96d883a51985fce7c239368bec47ee9ce7b307418d007022c0b456463dd.scope: Deactivated successfully.
Jan 31 04:59:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v83: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 200 B/s rd, 400 B/s wr, 1 op/s
Jan 31 04:59:29 np0005603787 python3[97526]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:59:29 np0005603787 podman[97577]: 2026-01-31 09:59:29.215793697 +0000 UTC m=+0.089442785 container create 5c20e1648025c74104f6fe39b8df833c02ef29bf55dd3cb2b51f71ab572cb26c (image=quay.io/ceph/ceph:v20, name=sleepy_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:59:29 np0005603787 podman[97577]: 2026-01-31 09:59:29.14452316 +0000 UTC m=+0.018172278 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:59:29 np0005603787 systemd[1]: Started libpod-conmon-5c20e1648025c74104f6fe39b8df833c02ef29bf55dd3cb2b51f71ab572cb26c.scope.
Jan 31 04:59:29 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:29 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/700fa81ba78041b1d3956486fd054b22a7d00e13832955afde659c991d893cd9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:29 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/700fa81ba78041b1d3956486fd054b22a7d00e13832955afde659c991d893cd9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:29 np0005603787 podman[97577]: 2026-01-31 09:59:29.3489508 +0000 UTC m=+0.222599888 container init 5c20e1648025c74104f6fe39b8df833c02ef29bf55dd3cb2b51f71ab572cb26c (image=quay.io/ceph/ceph:v20, name=sleepy_dirac, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:59:29 np0005603787 podman[97577]: 2026-01-31 09:59:29.356597404 +0000 UTC m=+0.230246492 container start 5c20e1648025c74104f6fe39b8df833c02ef29bf55dd3cb2b51f71ab572cb26c (image=quay.io/ceph/ceph:v20, name=sleepy_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:59:29 np0005603787 podman[97577]: 2026-01-31 09:59:29.391542699 +0000 UTC m=+0.265191777 container attach 5c20e1648025c74104f6fe39b8df833c02ef29bf55dd3cb2b51f71ab572cb26c (image=quay.io/ceph/ceph:v20, name=sleepy_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 04:59:29 np0005603787 podman[97607]: 2026-01-31 09:59:29.406480279 +0000 UTC m=+0.110977270 container create e76d4007084a5b1749d2a8304732b127813f35cbf6d5348c3da8c80eea3f2540 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:29 np0005603787 systemd[1]: Started libpod-conmon-e76d4007084a5b1749d2a8304732b127813f35cbf6d5348c3da8c80eea3f2540.scope.
Jan 31 04:59:29 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:29 np0005603787 podman[97607]: 2026-01-31 09:59:29.35795949 +0000 UTC m=+0.062456501 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:29 np0005603787 podman[97607]: 2026-01-31 09:59:29.465937399 +0000 UTC m=+0.170434390 container init e76d4007084a5b1749d2a8304732b127813f35cbf6d5348c3da8c80eea3f2540 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 04:59:29 np0005603787 podman[97607]: 2026-01-31 09:59:29.470305696 +0000 UTC m=+0.174802687 container start e76d4007084a5b1749d2a8304732b127813f35cbf6d5348c3da8c80eea3f2540 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 04:59:29 np0005603787 strange_lamarr[97625]: 167 167
Jan 31 04:59:29 np0005603787 systemd[1]: libpod-e76d4007084a5b1749d2a8304732b127813f35cbf6d5348c3da8c80eea3f2540.scope: Deactivated successfully.
Jan 31 04:59:29 np0005603787 podman[97607]: 2026-01-31 09:59:29.47377874 +0000 UTC m=+0.178275741 container attach e76d4007084a5b1749d2a8304732b127813f35cbf6d5348c3da8c80eea3f2540 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 04:59:29 np0005603787 podman[97607]: 2026-01-31 09:59:29.4741645 +0000 UTC m=+0.178661491 container died e76d4007084a5b1749d2a8304732b127813f35cbf6d5348c3da8c80eea3f2540 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:29 np0005603787 systemd[1]: var-lib-containers-storage-overlay-acd34224ab49e0011753018e42a46d88d61f63db409f5a38481a0cb487e86cac-merged.mount: Deactivated successfully.
Jan 31 04:59:29 np0005603787 podman[97607]: 2026-01-31 09:59:29.521711832 +0000 UTC m=+0.226208823 container remove e76d4007084a5b1749d2a8304732b127813f35cbf6d5348c3da8c80eea3f2540 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:29 np0005603787 systemd[1]: libpod-conmon-e76d4007084a5b1749d2a8304732b127813f35cbf6d5348c3da8c80eea3f2540.scope: Deactivated successfully.
Jan 31 04:59:29 np0005603787 podman[97669]: 2026-01-31 09:59:29.719456163 +0000 UTC m=+0.100942792 container create e331d9cb64ba7a2e3d73c6fc5d990f270338f23eeb48d67f8a9b12546f14cc84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_pare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 04:59:29 np0005603787 podman[97669]: 2026-01-31 09:59:29.641909038 +0000 UTC m=+0.023395687 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 04:59:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/879323063' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 04:59:29 np0005603787 sleepy_dirac[97605]: 
Jan 31 04:59:29 np0005603787 systemd[1]: libpod-5c20e1648025c74104f6fe39b8df833c02ef29bf55dd3cb2b51f71ab572cb26c.scope: Deactivated successfully.
Jan 31 04:59:29 np0005603787 conmon[97605]: conmon 5c20e1648025c74104f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c20e1648025c74104f6fe39b8df833c02ef29bf55dd3cb2b51f71ab572cb26c.scope/container/memory.events
Jan 31 04:59:29 np0005603787 sleepy_dirac[97605]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name
":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.nqlmbk","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 31 04:59:29 np0005603787 podman[97577]: 2026-01-31 09:59:29.842449104 +0000 UTC m=+0.716098192 container died 5c20e1648025c74104f6fe39b8df833c02ef29bf55dd3cb2b51f71ab572cb26c (image=quay.io/ceph/ceph:v20, name=sleepy_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:29 np0005603787 systemd[1]: Started libpod-conmon-e331d9cb64ba7a2e3d73c6fc5d990f270338f23eeb48d67f8a9b12546f14cc84.scope.
Jan 31 04:59:29 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:29 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e7f5639847a928eeb54ab80ed2bce85ef2d04d65cec1bb6f658b8a2768b35c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:29 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e7f5639847a928eeb54ab80ed2bce85ef2d04d65cec1bb6f658b8a2768b35c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:29 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e7f5639847a928eeb54ab80ed2bce85ef2d04d65cec1bb6f658b8a2768b35c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:29 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e7f5639847a928eeb54ab80ed2bce85ef2d04d65cec1bb6f658b8a2768b35c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:30 np0005603787 systemd[1]: var-lib-containers-storage-overlay-700fa81ba78041b1d3956486fd054b22a7d00e13832955afde659c991d893cd9-merged.mount: Deactivated successfully.
Jan 31 04:59:30 np0005603787 podman[97577]: 2026-01-31 09:59:30.440502037 +0000 UTC m=+1.314151125 container remove 5c20e1648025c74104f6fe39b8df833c02ef29bf55dd3cb2b51f71ab572cb26c (image=quay.io/ceph/ceph:v20, name=sleepy_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 04:59:30 np0005603787 podman[97669]: 2026-01-31 09:59:30.470733136 +0000 UTC m=+0.852219795 container init e331d9cb64ba7a2e3d73c6fc5d990f270338f23eeb48d67f8a9b12546f14cc84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:30 np0005603787 podman[97669]: 2026-01-31 09:59:30.477540648 +0000 UTC m=+0.859027267 container start e331d9cb64ba7a2e3d73c6fc5d990f270338f23eeb48d67f8a9b12546f14cc84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_pare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 04:59:30 np0005603787 podman[97669]: 2026-01-31 09:59:30.501623112 +0000 UTC m=+0.883109741 container attach e331d9cb64ba7a2e3d73c6fc5d990f270338f23eeb48d67f8a9b12546f14cc84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 04:59:30 np0005603787 systemd[1]: libpod-conmon-5c20e1648025c74104f6fe39b8df833c02ef29bf55dd3cb2b51f71ab572cb26c.scope: Deactivated successfully.
Jan 31 04:59:30 np0005603787 gallant_pare[97698]: {
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:    "0": [
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:        {
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "devices": [
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "/dev/loop3"
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            ],
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "lv_name": "ceph_lv0",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "lv_size": "21470642176",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "name": "ceph_lv0",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "tags": {
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.cluster_name": "ceph",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.crush_device_class": "",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.encrypted": "0",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.objectstore": "bluestore",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.osd_id": "0",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.type": "block",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.vdo": "0",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.with_tpm": "0"
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            },
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "type": "block",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "vg_name": "ceph_vg0"
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:        }
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:    ],
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:    "1": [
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:        {
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "devices": [
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "/dev/loop4"
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            ],
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "lv_name": "ceph_lv1",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "lv_size": "21470642176",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "name": "ceph_lv1",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "tags": {
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.cluster_name": "ceph",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.crush_device_class": "",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.encrypted": "0",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.objectstore": "bluestore",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.osd_id": "1",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.type": "block",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.vdo": "0",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.with_tpm": "0"
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            },
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "type": "block",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "vg_name": "ceph_vg1"
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:        }
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:    ],
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:    "2": [
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:        {
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "devices": [
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "/dev/loop5"
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            ],
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "lv_name": "ceph_lv2",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "lv_size": "21470642176",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "name": "ceph_lv2",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "tags": {
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.cluster_name": "ceph",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.crush_device_class": "",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.encrypted": "0",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.objectstore": "bluestore",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.osd_id": "2",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.type": "block",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.vdo": "0",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:                "ceph.with_tpm": "0"
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            },
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "type": "block",
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:            "vg_name": "ceph_vg2"
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:        }
Jan 31 04:59:30 np0005603787 gallant_pare[97698]:    ]
Jan 31 04:59:30 np0005603787 gallant_pare[97698]: }
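The JSON block printed by gallant_pare above maps OSD ids 0-2 to LVM logical volumes backed by /dev/loop3-5 and has the shape of "ceph-volume lvm list --format json" output. A minimal sketch for pulling the osd-to-device mapping out of such a report with jq, assuming the JSON has been saved to osd-lvm.json (the filename is illustrative, not from the log):

    # print "osd.<id> <lv_path> <backing device>" for every OSD in the report
    jq -r 'to_entries[] | .value[] | "osd.\(.tags["ceph.osd_id"]) \(.lv_path) \(.devices[0])"' osd-lvm.json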
Jan 31 04:59:30 np0005603787 systemd[1]: libpod-e331d9cb64ba7a2e3d73c6fc5d990f270338f23eeb48d67f8a9b12546f14cc84.scope: Deactivated successfully.
Jan 31 04:59:30 np0005603787 podman[97669]: 2026-01-31 09:59:30.764825805 +0000 UTC m=+1.146312434 container died e331d9cb64ba7a2e3d73c6fc5d990f270338f23eeb48d67f8a9b12546f14cc84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_pare, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:59:30 np0005603787 systemd[1]: var-lib-containers-storage-overlay-0e7f5639847a928eeb54ab80ed2bce85ef2d04d65cec1bb6f658b8a2768b35c2-merged.mount: Deactivated successfully.
Jan 31 04:59:30 np0005603787 podman[97669]: 2026-01-31 09:59:30.871429627 +0000 UTC m=+1.252916256 container remove e331d9cb64ba7a2e3d73c6fc5d990f270338f23eeb48d67f8a9b12546f14cc84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 04:59:30 np0005603787 systemd[1]: libpod-conmon-e331d9cb64ba7a2e3d73c6fc5d990f270338f23eeb48d67f8a9b12546f14cc84.scope: Deactivated successfully.
Jan 31 04:59:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v84: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 11 KiB/s wr, 237 op/s
Jan 31 04:59:31 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:59:31 np0005603787 podman[97807]: 2026-01-31 09:59:31.309882669 +0000 UTC m=+0.049868995 container create 71e6f2cdc1462c8830774700cefd3b6887b30eeeaf2c333741640658e693c276 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 04:59:31 np0005603787 python3[97795]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
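The ansible-invoked command above, reflowed as a plain shell invocation for readability (every flag is taken verbatim from the log line; only the line breaks are new). The "mimic" reply printed by the reverent_booth container a few lines below appears to be the result of this query:

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --entrypoint ceph quay.io/ceph/ceph:v20 \
      --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      osd get-require-min-compat-client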
Jan 31 04:59:31 np0005603787 systemd[1]: Started libpod-conmon-71e6f2cdc1462c8830774700cefd3b6887b30eeeaf2c333741640658e693c276.scope.
Jan 31 04:59:31 np0005603787 podman[97807]: 2026-01-31 09:59:31.279953029 +0000 UTC m=+0.019939375 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:31 np0005603787 podman[97820]: 2026-01-31 09:59:31.377698604 +0000 UTC m=+0.037000681 container create 2b958706db7f96b3291edd491648dda5549d899ef364c68930a94ae53adfd7eb (image=quay.io/ceph/ceph:v20, name=reverent_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 04:59:31 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:31 np0005603787 systemd[1]: Started libpod-conmon-2b958706db7f96b3291edd491648dda5549d899ef364c68930a94ae53adfd7eb.scope.
Jan 31 04:59:31 np0005603787 podman[97820]: 2026-01-31 09:59:31.363781402 +0000 UTC m=+0.023083499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:59:31 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6b355a1ad14ec40b94d15bab0d50c9f195adb56621fe4c37893a607927cf85/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6b355a1ad14ec40b94d15bab0d50c9f195adb56621fe4c37893a607927cf85/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:31 np0005603787 podman[97807]: 2026-01-31 09:59:31.524929494 +0000 UTC m=+0.264915830 container init 71e6f2cdc1462c8830774700cefd3b6887b30eeeaf2c333741640658e693c276 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:31 np0005603787 podman[97807]: 2026-01-31 09:59:31.529521656 +0000 UTC m=+0.269507982 container start 71e6f2cdc1462c8830774700cefd3b6887b30eeeaf2c333741640658e693c276 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lumiere, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 04:59:31 np0005603787 romantic_lumiere[97836]: 167 167
Jan 31 04:59:31 np0005603787 systemd[1]: libpod-71e6f2cdc1462c8830774700cefd3b6887b30eeeaf2c333741640658e693c276.scope: Deactivated successfully.
Jan 31 04:59:31 np0005603787 podman[97820]: 2026-01-31 09:59:31.546651315 +0000 UTC m=+0.205953412 container init 2b958706db7f96b3291edd491648dda5549d899ef364c68930a94ae53adfd7eb (image=quay.io/ceph/ceph:v20, name=reverent_booth, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Jan 31 04:59:31 np0005603787 podman[97820]: 2026-01-31 09:59:31.552776599 +0000 UTC m=+0.212078676 container start 2b958706db7f96b3291edd491648dda5549d899ef364c68930a94ae53adfd7eb (image=quay.io/ceph/ceph:v20, name=reverent_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 04:59:31 np0005603787 podman[97807]: 2026-01-31 09:59:31.587393075 +0000 UTC m=+0.327379421 container attach 71e6f2cdc1462c8830774700cefd3b6887b30eeeaf2c333741640658e693c276 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lumiere, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:59:31 np0005603787 podman[97807]: 2026-01-31 09:59:31.587803656 +0000 UTC m=+0.327790002 container died 71e6f2cdc1462c8830774700cefd3b6887b30eeeaf2c333741640658e693c276 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lumiere, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:31 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ef2e3ff92637df9f42efe489e60f53c75291d0f6e6348279354883e6710fde14-merged.mount: Deactivated successfully.
Jan 31 04:59:31 np0005603787 podman[97807]: 2026-01-31 09:59:31.707721614 +0000 UTC m=+0.447707940 container remove 71e6f2cdc1462c8830774700cefd3b6887b30eeeaf2c333741640658e693c276 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:59:31 np0005603787 podman[97820]: 2026-01-31 09:59:31.735902399 +0000 UTC m=+0.395204506 container attach 2b958706db7f96b3291edd491648dda5549d899ef364c68930a94ae53adfd7eb (image=quay.io/ceph/ceph:v20, name=reverent_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:31 np0005603787 systemd[1]: libpod-conmon-71e6f2cdc1462c8830774700cefd3b6887b30eeeaf2c333741640658e693c276.scope: Deactivated successfully.
Jan 31 04:59:31 np0005603787 podman[97884]: 2026-01-31 09:59:31.826522313 +0000 UTC m=+0.036013784 container create 718cde1984e40d0f8606cb621243fabf29527e5205dee4e13f299ba4800ba381 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:31 np0005603787 systemd[1]: Started libpod-conmon-718cde1984e40d0f8606cb621243fabf29527e5205dee4e13f299ba4800ba381.scope.
Jan 31 04:59:31 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c114a7a04f1ebb6de5da667312653e835574d1ee46bb6f30532e052d9581d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c114a7a04f1ebb6de5da667312653e835574d1ee46bb6f30532e052d9581d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c114a7a04f1ebb6de5da667312653e835574d1ee46bb6f30532e052d9581d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c114a7a04f1ebb6de5da667312653e835574d1ee46bb6f30532e052d9581d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:31 np0005603787 podman[97884]: 2026-01-31 09:59:31.810920316 +0000 UTC m=+0.020411817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 04:59:31 np0005603787 podman[97884]: 2026-01-31 09:59:31.917972321 +0000 UTC m=+0.127463812 container init 718cde1984e40d0f8606cb621243fabf29527e5205dee4e13f299ba4800ba381 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_dijkstra, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:31 np0005603787 podman[97884]: 2026-01-31 09:59:31.9250367 +0000 UTC m=+0.134528191 container start 718cde1984e40d0f8606cb621243fabf29527e5205dee4e13f299ba4800ba381 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Jan 31 04:59:31 np0005603787 podman[97884]: 2026-01-31 09:59:31.929673194 +0000 UTC m=+0.139164695 container attach 718cde1984e40d0f8606cb621243fabf29527e5205dee4e13f299ba4800ba381 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 04:59:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 31 04:59:32 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2579497441' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 31 04:59:32 np0005603787 reverent_booth[97841]: mimic
Jan 31 04:59:32 np0005603787 systemd[1]: libpod-2b958706db7f96b3291edd491648dda5549d899ef364c68930a94ae53adfd7eb.scope: Deactivated successfully.
Jan 31 04:59:32 np0005603787 podman[97820]: 2026-01-31 09:59:32.046702136 +0000 UTC m=+0.706004253 container died 2b958706db7f96b3291edd491648dda5549d899ef364c68930a94ae53adfd7eb (image=quay.io/ceph/ceph:v20, name=reverent_booth, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 04:59:32 np0005603787 systemd[1]: var-lib-containers-storage-overlay-5d6b355a1ad14ec40b94d15bab0d50c9f195adb56621fe4c37893a607927cf85-merged.mount: Deactivated successfully.
Jan 31 04:59:32 np0005603787 podman[97820]: 2026-01-31 09:59:32.098390708 +0000 UTC m=+0.757692785 container remove 2b958706db7f96b3291edd491648dda5549d899ef364c68930a94ae53adfd7eb (image=quay.io/ceph/ceph:v20, name=reverent_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:59:32 np0005603787 systemd[1]: libpod-conmon-2b958706db7f96b3291edd491648dda5549d899ef364c68930a94ae53adfd7eb.scope: Deactivated successfully.
Jan 31 04:59:32 np0005603787 lvm[97993]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 04:59:32 np0005603787 lvm[97993]: VG ceph_vg0 finished
Jan 31 04:59:32 np0005603787 lvm[97996]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 04:59:32 np0005603787 lvm[97996]: VG ceph_vg1 finished
Jan 31 04:59:32 np0005603787 lvm[97998]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 04:59:32 np0005603787 lvm[97998]: VG ceph_vg2 finished
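The lvm autoactivation messages above confirm that each loop device carries a complete single-PV volume group (ceph_vg0 through ceph_vg2). A minimal sketch for listing the same mapping directly with the LVM tools (read-only queries, run as root on the host):

    # show each ceph VG with its backing PV, and each LV with its size and ceph tags
    vgs -o vg_name,pv_name ceph_vg0 ceph_vg1 ceph_vg2
    lvs -o lv_name,lv_path,lv_size,lv_tags ceph_vg0 ceph_vg1 ceph_vg2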
Jan 31 04:59:32 np0005603787 condescending_dijkstra[97901]: {}
Jan 31 04:59:32 np0005603787 systemd[1]: libpod-718cde1984e40d0f8606cb621243fabf29527e5205dee4e13f299ba4800ba381.scope: Deactivated successfully.
Jan 31 04:59:32 np0005603787 systemd[1]: libpod-718cde1984e40d0f8606cb621243fabf29527e5205dee4e13f299ba4800ba381.scope: Consumed 1.000s CPU time.
Jan 31 04:59:32 np0005603787 podman[97884]: 2026-01-31 09:59:32.623505079 +0000 UTC m=+0.832996550 container died 718cde1984e40d0f8606cb621243fabf29527e5205dee4e13f299ba4800ba381 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_dijkstra, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:59:32 np0005603787 systemd[1]: var-lib-containers-storage-overlay-64c114a7a04f1ebb6de5da667312653e835574d1ee46bb6f30532e052d9581d7-merged.mount: Deactivated successfully.
Jan 31 04:59:32 np0005603787 podman[97884]: 2026-01-31 09:59:32.666924431 +0000 UTC m=+0.876415902 container remove 718cde1984e40d0f8606cb621243fabf29527e5205dee4e13f299ba4800ba381 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_dijkstra, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 04:59:32 np0005603787 systemd[1]: libpod-conmon-718cde1984e40d0f8606cb621243fabf29527e5205dee4e13f299ba4800ba381.scope: Deactivated successfully.
Jan 31 04:59:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 04:59:32 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 04:59:32 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:32 np0005603787 python3[98063]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 04:59:32 np0005603787 podman[98064]: 2026-01-31 09:59:32.944111028 +0000 UTC m=+0.034065793 container create 585558a302ddbc3bd55e42f417e189c5d0b5a64f6a9e45dd4a034c93106e709f (image=quay.io/ceph/ceph:v20, name=intelligent_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 04:59:32 np0005603787 systemd[1]: Started libpod-conmon-585558a302ddbc3bd55e42f417e189c5d0b5a64f6a9e45dd4a034c93106e709f.scope.
Jan 31 04:59:32 np0005603787 systemd[1]: Started libcrun container.
Jan 31 04:59:33 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaa0d739e20d3ed0fa965ec56c35d0bac00203bad2fa04ac2286151da226992e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:33 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaa0d739e20d3ed0fa965ec56c35d0bac00203bad2fa04ac2286151da226992e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:59:33 np0005603787 podman[98064]: 2026-01-31 09:59:33.015026556 +0000 UTC m=+0.104981331 container init 585558a302ddbc3bd55e42f417e189c5d0b5a64f6a9e45dd4a034c93106e709f (image=quay.io/ceph/ceph:v20, name=intelligent_sutherland, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 04:59:33 np0005603787 podman[98064]: 2026-01-31 09:59:33.02154484 +0000 UTC m=+0.111499605 container start 585558a302ddbc3bd55e42f417e189c5d0b5a64f6a9e45dd4a034c93106e709f (image=quay.io/ceph/ceph:v20, name=intelligent_sutherland, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 04:59:33 np0005603787 podman[98064]: 2026-01-31 09:59:33.025150806 +0000 UTC m=+0.115105571 container attach 585558a302ddbc3bd55e42f417e189c5d0b5a64f6a9e45dd4a034c93106e709f (image=quay.io/ceph/ceph:v20, name=intelligent_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:59:33 np0005603787 podman[98064]: 2026-01-31 09:59:32.928534211 +0000 UTC m=+0.018488996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 04:59:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v85: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 8.2 KiB/s wr, 178 op/s
Jan 31 04:59:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 31 04:59:33 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1513832217' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 31 04:59:33 np0005603787 intelligent_sutherland[98079]: 
Jan 31 04:59:33 np0005603787 intelligent_sutherland[98079]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
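The versions report above (the reply to the "ceph versions -f json" call invoked at 04:59:32) shows every daemon class on ceph version 20.2.0, i.e. no mixed versions. A minimal sketch for checking the same thing with jq, assuming a ceph CLI with the admin keyring (on this host the binary runs inside the quay.io/ceph/ceph:v20 container, as the surrounding lines show):

    ceph versions -f json | jq -r '.overall | keys[]'   # the single version string in use
    ceph versions -f json | jq '.overall | length'      # 1 means all daemons agree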
Jan 31 04:59:33 np0005603787 systemd[1]: libpod-585558a302ddbc3bd55e42f417e189c5d0b5a64f6a9e45dd4a034c93106e709f.scope: Deactivated successfully.
Jan 31 04:59:33 np0005603787 podman[98064]: 2026-01-31 09:59:33.557877972 +0000 UTC m=+0.647832747 container died 585558a302ddbc3bd55e42f417e189c5d0b5a64f6a9e45dd4a034c93106e709f (image=quay.io/ceph/ceph:v20, name=intelligent_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:59:33 np0005603787 systemd[1]: var-lib-containers-storage-overlay-eaa0d739e20d3ed0fa965ec56c35d0bac00203bad2fa04ac2286151da226992e-merged.mount: Deactivated successfully.
Jan 31 04:59:33 np0005603787 podman[98064]: 2026-01-31 09:59:33.711970105 +0000 UTC m=+0.801924870 container remove 585558a302ddbc3bd55e42f417e189c5d0b5a64f6a9e45dd4a034c93106e709f (image=quay.io/ceph/ceph:v20, name=intelligent_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 31 04:59:33 np0005603787 systemd[1]: libpod-conmon-585558a302ddbc3bd55e42f417e189c5d0b5a64f6a9e45dd4a034c93106e709f.scope: Deactivated successfully.
Jan 31 04:59:33 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:33 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:33 np0005603787 ceph-mgr[75453]: [progress INFO root] Completed event 33627e9d-5fba-4867-86a2-4905f7e2cc96 (Global Recovery Event) in 15 seconds
Jan 31 04:59:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v86: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 7.1 KiB/s wr, 158 op/s
Jan 31 04:59:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:59:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v87: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 6.4 KiB/s wr, 141 op/s
Jan 31 04:59:38 np0005603787 ceph-mgr[75453]: [progress INFO root] Writing back 6 completed events
Jan 31 04:59:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 04:59:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v88: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Jan 31 04:59:39 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:40 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Jan 31 04:59:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_09:59:43
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'images', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.control']
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
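The balancer pass above runs in upmap mode and prepares 0 of a maximum of 10 upmap changes, which is expected for 11 near-empty, already balanced PGs. Upmap needs clients at luminous or newer, so the "mimic" min-compat-client reported earlier is sufficient. A minimal sketch of the read-only commands for inspecting the balancer:

    ceph balancer status   # mode, whether it is active, last optimization attempt
    ceph balancer eval     # current distribution score (lower is better)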
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v90: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.481904035994121e-07 of space, bias 4.0, pg target 0.0006578284843192945 quantized to 16 (current 1)
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
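The per-pool targets logged above are consistent with usage_ratio × bias × (OSD count × target PGs per OSD): with 3 OSDs and the default mon_target_pg_per_osd of 100, the '.mgr' line works out to 7.185749983720779e-06 × 1.0 × 300 ≈ 0.0021557, and the cephfs.cephfs.meta line to 5.481904035994121e-07 × 4.0 × 300 ≈ 0.00065783, matching the logged targets before quantization to a power of two. The factor of 300 is an inference from these numbers, not something the log states. A quick arithmetic check:

    # assumption: 3 OSDs x mon_target_pg_per_osd (default 100) = 300
    awk 'BEGIN { printf "%.16g\n", 7.185749983720779e-06 * 1.0 * 300 }'   # 0.002155724995116234
    awk 'BEGIN { printf "%.16g\n", 5.481904035994121e-07 * 4.0 * 300 }'   # 0.0006578284843192945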
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:59:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:59:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:59:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:59:44 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 31 04:59:44 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 31 04:59:44 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 04:59:44 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 31 04:59:44 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 31 04:59:44 np0005603787 ceph-mgr[75453]: [progress INFO root] update: starting ev 796f0305-023a-47c8-9af3-7c5c2fee01b6 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 31 04:59:44 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 31 04:59:44 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 31 04:59:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v92: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:59:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 04:59:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 04:59:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 31 04:59:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 31 04:59:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 04:59:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 31 04:59:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 04:59:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 31 04:59:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 04:59:45 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=17/18 n=0 ec=16/16 lis/c=17/17 les/c/f=18/18/0 sis=41 pruub=15.434219360s) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active pruub 83.614967346s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 04:59:45 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 31 04:59:45 np0005603787 ceph-mgr[75453]: [progress INFO root] update: starting ev 7dcde4ea-0fda-48c5-9506-9ea587a41eaf (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 31 04:59:45 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=17/18 n=0 ec=16/16 lis/c=17/17 les/c/f=18/18/0 sis=41 pruub=15.434219360s) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown pruub 83.614967346s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 31 04:59:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 31 04:59:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:59:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 31 04:59:46 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 04:59:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 31 04:59:46 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.1e( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.1( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.c( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.e( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.10( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.12( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.14( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-mgr[75453]: [progress INFO root] update: starting ev 7030edfa-ba9d-4fcf-aee8-07da17d0183d (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 31 04:59:46 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 31 04:59:46 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 04:59:46 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.1a( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:46 np0005603787 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.1e( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.1( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.0( empty local-lis/les=41/42 n=0 ec=16/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.e( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.10( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.12( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.14( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.1a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 31 04:59:46 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 31 04:59:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v95: 42 pgs: 31 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:59:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 04:59:47 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 04:59:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 04:59:47 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 04:59:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 31 04:59:47 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 04:59:47 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 31 04:59:47 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 04:59:47 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 04:59:47 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 31 04:59:47 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 04:59:47 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 04:59:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 31 04:59:47 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=43 pruub=14.377059937s) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active pruub 97.894874573s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 04:59:47 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 31 04:59:47 np0005603787 ceph-mgr[75453]: [progress INFO root] update: starting ev 2b060d56-1d2a-41ed-8dc8-76a5a1beb483 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 31 04:59:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Jan 31 04:59:47 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 31 04:59:47 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=43 pruub=14.377059937s) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown pruub 97.894874573s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 31 04:59:48 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 31 04:59:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 31 04:59:48 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 31 04:59:48 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 04:59:48 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 04:59:48 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 31 04:59:48 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 31 04:59:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 31 04:59:48 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 43 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=43 pruub=11.947142601s) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active pruub 90.691833496s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 04:59:48 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 43 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=43 pruub=11.947142601s) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown pruub 90.691833496s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 31 04:59:48 np0005603787 ceph-mgr[75453]: [progress INFO root] update: starting ev 2556962d-c066-4133-86f7-eaae698ce26b (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 31 04:59:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 31 04:59:48 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.1e( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.1f( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.1d( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.b( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.6( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.3( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.19( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.c( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.15( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.16( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.17( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.1e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.1f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.1d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.6( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.3( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.19( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.0( empty local-lis/les=43/44 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.15( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.16( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.17( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v98: 104 pgs: 1 peering, 62 unknown, 41 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:59:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Jan 31 04:59:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 31 04:59:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 04:59:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 04:59:49 np0005603787 ceph-mgr[75453]: [progress WARNING root] Starting Global Recovery Event,63 pgs not in active + clean state
Jan 31 04:59:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 31 04:59:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 04:59:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 31 04:59:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 04:59:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 31 04:59:49 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 31 04:59:49 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 45 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45 pruub=12.982988358s) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active pruub 85.646347046s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.1c( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.1a( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.19( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.b( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.4( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.2( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.d( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.10( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.14( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.13( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 45 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45 pruub=12.982988358s) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown pruub 85.646347046s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:49 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 31 04:59:49 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 31 04:59:49 np0005603787 ceph-mgr[75453]: [progress INFO root] update: starting ev f028c9b6-e1cc-481b-a66f-f63d81b1c47d (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 31 04:59:49 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 31 04:59:49 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.1c( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.19( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.0( empty local-lis/les=43/45 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.4( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.1a( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.d( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.b( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.10( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.14( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.13( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 45 pg[3.2( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Jan 31 04:59:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 45 pg[6.0( v 34'39 (0'0,34'39] local-lis/les=20/21 n=22 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=45 pruub=13.732336044s) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 32'38 mlcod 32'38 active pruub 100.261917114s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 45 pg[6.0( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=20/21 n=1 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=45 pruub=13.732336044s) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 32'38 mlcod 0'0 unknown pruub 100.261917114s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 31 04:59:50 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 04:59:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 31 04:59:50 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.a( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=20/21 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.4( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=20/21 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.9( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=20/21 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.5( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=20/21 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.8( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=20/21 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.7( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=20/21 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.b( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=20/21 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.6( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=20/21 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.1( v 34'39 (0'0,34'39] local-lis/les=20/21 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.3( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=20/21 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.2( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=20/21 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.e( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=20/21 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.f( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=20/21 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.c( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=20/21 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.d( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=20/21 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-mgr[75453]: [progress INFO root] update: starting ev 89556425-9db5-40d9-8623-d61a2158ea5b (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 31 04:59:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Jan 31 04:59:50 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.4( v 34'39 (0'0,34'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 04:59:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 31 04:59:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 04:59:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.1f( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.10( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.17( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.8( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.a( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.b( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.6( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.e( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.1c( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.d( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.1b( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.7( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.5( v 34'39 (0'0,34'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.a( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.8( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.9( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.b( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.0( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 32'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.1( v 34'39 (0'0,34'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.6( v 34'39 (0'0,34'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.2( v 34'39 (0'0,34'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.f( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.c( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.e( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.d( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 46 pg[6.3( v 34'39 (0'0,34'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.1f( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.10( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.17( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.a( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.8( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.b( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.0( empty local-lis/les=45/46 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.6( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.e( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.1b( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.1c( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.d( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [2] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v101: 150 pgs: 1 peering, 77 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:59:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 04:59:51 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 04:59:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 04:59:51 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 04:59:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:59:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 31 04:59:51 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 04:59:51 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 04:59:51 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 04:59:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 31 04:59:51 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 31 04:59:51 np0005603787 ceph-mgr[75453]: [progress INFO root] update: starting ev 846ae2f2-3c2f-4d5a-8d36-e900917445b6 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 31 04:59:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Jan 31 04:59:51 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 31 04:59:51 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 47 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=47 pruub=14.403697968s) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active pruub 96.181282043s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 04:59:51 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 47 pg[8.0( v 32'6 (0'0,32'6] local-lis/les=31/32 n=6 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=47 pruub=15.760576248s) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 32'5 mlcod 32'5 active pruub 97.538230896s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 04:59:51 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 04:59:51 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 31 04:59:51 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 04:59:51 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 04:59:51 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 47 pg[8.0( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=47 pruub=15.760576248s) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 32'5 mlcod 0'0 unknown pruub 97.538230896s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:51 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 47 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=47 pruub=14.403697968s) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown pruub 96.181282043s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 31 04:59:52 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 31 04:59:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 31 04:59:52 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 31 04:59:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 31 04:59:52 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 31 04:59:52 np0005603787 ceph-mgr[75453]: [progress INFO root] update: starting ev 37add59c-4e51-4d4e-9196-4a9254c46aae (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 31 04:59:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Jan 31 04:59:52 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.1c( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.1d( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.12( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.1e( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.13( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.10( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.11( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.1f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.17( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.19( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.18( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.16( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.1a( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.15( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.1b( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.14( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.4( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.b( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.a( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.5( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.9( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.8( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.2( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.7( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.6( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.d( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.9( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.6( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.b( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.f( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.1( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.e( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.3( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.5( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.a( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.8( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.7( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.d( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.e( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.1( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.2( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.c( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.1c( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.13( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.3( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.1d( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.11( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.1e( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.10( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.1f( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.17( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.16( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.18( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.19( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.15( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.1a( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.14( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.1b( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.1e( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.12( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.10( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.19( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.17( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.16( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.b( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.5( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.14( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.7( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.d( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.0( empty local-lis/les=47/48 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.0( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 32'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.1( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.3( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.8( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:52 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.7( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 04:59:53 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 04:59:53 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 04:59:53 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 31 04:59:53 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 31 04:59:53 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.a( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.13( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.1d( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.17( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.1e( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.19( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.16( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [1] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 48 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v104: 212 pgs: 93 unknown, 119 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:59:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 04:59:53 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 04:59:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 04:59:53 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 04:59:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 31 04:59:53 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 04:59:53 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 04:59:53 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 04:59:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 31 04:59:53 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] update: starting ev 788fad79-1d79-4853-a7cd-143ccae3b442 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] complete: finished ev 796f0305-023a-47c8-9af3-7c5c2fee01b6 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] Completed event 796f0305-023a-47c8-9af3-7c5c2fee01b6 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 10 seconds
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] complete: finished ev 7dcde4ea-0fda-48c5-9506-9ea587a41eaf (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] Completed event 7dcde4ea-0fda-48c5-9506-9ea587a41eaf (PG autoscaler increasing pool 3 PGs from 1 to 32) in 9 seconds
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] complete: finished ev 7030edfa-ba9d-4fcf-aee8-07da17d0183d (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] Completed event 7030edfa-ba9d-4fcf-aee8-07da17d0183d (PG autoscaler increasing pool 4 PGs from 1 to 32) in 8 seconds
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] complete: finished ev 2b060d56-1d2a-41ed-8dc8-76a5a1beb483 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] Completed event 2b060d56-1d2a-41ed-8dc8-76a5a1beb483 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] complete: finished ev 2556962d-c066-4133-86f7-eaae698ce26b (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] Completed event 2556962d-c066-4133-86f7-eaae698ce26b (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] complete: finished ev f028c9b6-e1cc-481b-a66f-f63d81b1c47d (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] Completed event f028c9b6-e1cc-481b-a66f-f63d81b1c47d (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] complete: finished ev 89556425-9db5-40d9-8623-d61a2158ea5b (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] Completed event 89556425-9db5-40d9-8623-d61a2158ea5b (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] complete: finished ev 846ae2f2-3c2f-4d5a-8d36-e900917445b6 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] Completed event 846ae2f2-3c2f-4d5a-8d36-e900917445b6 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] complete: finished ev 37add59c-4e51-4d4e-9196-4a9254c46aae (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] Completed event 37add59c-4e51-4d4e-9196-4a9254c46aae (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] complete: finished ev 788fad79-1d79-4853-a7cd-143ccae3b442 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 31 04:59:53 np0005603787 ceph-mgr[75453]: [progress INFO root] Completed event 788fad79-1d79-4853-a7cd-143ccae3b442 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 31 04:59:53 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 49 pg[9.0( v 39'483 (0'0,39'483] local-lis/les=33/34 n=210 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=49 pruub=15.800107956s) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 39'482 mlcod 39'482 active pruub 99.583503723s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 49 pg[9.0( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=49 pruub=15.800107956s) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 39'482 mlcod 0'0 unknown pruub 99.583503723s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1dff880 space 0x558db1048240 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d64b00 space 0x558db14cfd40 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d8fd00 space 0x558db14ecb40 0x0~98 clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d63780 space 0x558db1044e40 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d75a80 space 0x558db14ceb40 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d65680 space 0x558db1045d40 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d9d980 space 0x558db15df740 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d99a80 space 0x558db15d5740 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1dffe00 space 0x558db15f4240 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1dffa80 space 0x558db1389a40 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d99880 space 0x558db15f5440 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d98c80 space 0x558db1542b40 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d64300 space 0x558db15c3740 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1db1b00 space 0x558db1554e40 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1da5a00 space 0x558db15f5d40 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d8fc80 space 0x558db14dd440 0x0~98 clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d98e00 space 0x558db14cc240 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d75c80 space 0x558db15ec540 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d74080 space 0x558db14b1140 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d99000 space 0x558db1554540 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1da5d00 space 0x558db20f0e40 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1da5780 space 0x558db14c6840 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d98b00 space 0x558db1542240 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1da5700 space 0x558db14b1a40 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d65380 space 0x558db14dcb40 0x0~98 clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d99e80 space 0x558db1564840 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1da5600 space 0x558db1e61140 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d9be00 space 0x558db2610840 0x0~98 clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1da4b80 space 0x558db1e60540 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d9cf00 space 0x558db160e540 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d50400 space 0x558db14f1d40 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1da5c00 space 0x558db14c7140 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1db1a00 space 0x558db14f0840 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d8f800 space 0x558db14ddd40 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d9d600 space 0x558db1565a40 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d9c480 space 0x558db15ece40 0x0~98 clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1dffa00 space 0x558db15f4b40 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1da4680 space 0x558db160fa40 0x0~98 clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1da5800 space 0x558db1044240 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d8fd80 space 0x558db1048e40 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d99c80 space 0x558db15d4e40 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db080be00 space 0x558db14b0840 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1db9c80 space 0x558db1052840 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d98980 space 0x558db1585740 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d9c600 space 0x558db14cf440 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d65400 space 0x558db0ffba40 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d65b80 space 0x558db14edd40 0x0~98 clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d98500 space 0x558db15dee40 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d9da00 space 0x558db14cc840 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1db1d00 space 0x558db1555740 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d99e00 space 0x558db15ed140 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d9df80 space 0x558db20f0540 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d9a100 space 0x558db1024e40 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d99100 space 0x558db15d4540 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d99480 space 0x558db102e840 0x0~9a clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d9c700 space 0x558db20f1740 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d63d00 space 0x558db1543440 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1d64680 space 0x558db2509740 0x0~98 clean)
Jan 31 04:59:54 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558db1e31680) split_cache   moving buffer(0x558db1dfff80 space 0x558db14c7a40 0x0~6e clean)
Jan 31 04:59:54 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 04:59:54 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 04:59:54 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 04:59:54 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 04:59:54 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 04:59:54 np0005603787 ceph-mgr[75453]: [progress INFO root] Writing back 16 completed events
Jan 31 04:59:54 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 04:59:54 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:54 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 31 04:59:54 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 31 04:59:54 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 31 04:59:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v106: 274 pgs: 124 unknown, 150 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:59:55 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 04:59:55 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 04:59:55 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 31 04:59:55 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 49 pg[10.0( v 39'18 (0'0,39'18] local-lis/les=35/36 n=9 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=49 pruub=8.849641800s) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 39'17 mlcod 39'17 active pruub 86.673858643s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 04:59:55 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 49 pg[10.0( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=49 pruub=8.849641800s) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 39'17 mlcod 0'0 unknown pruub 86.673858643s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 31 04:59:55 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 31 04:59:55 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.15( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.14( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.17( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.16( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.11( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.10( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.13( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.12( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.d( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.c( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.9( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.b( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.2( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.1( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.8( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.a( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.3( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.4( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.7( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.6( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.5( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.1a( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.1b( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.18( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.19( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.1e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.1c( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.1d( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 04:59:55 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.14( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.10( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.12( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.0( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 39'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.2( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.a( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.4( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.5( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.1a( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.18( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.1c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:55 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 50 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 31 04:59:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 04:59:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 31 04:59:56 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.12( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.10( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.1e( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.1d( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.1c( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.1b( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.1f( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.1a( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.19( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.11( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.18( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.7( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.6( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.5( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.4( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.3( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.8( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.f( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.9( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.a( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.b( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.c( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.d( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.e( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.2( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.13( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.14( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.15( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.16( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.17( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.12( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.1c( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.1d( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.18( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.5( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.3( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.0( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 39'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.9( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.a( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.c( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.1f( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.e( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.d( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.15( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.14( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 51 pg[10.1b( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:56 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 31 04:59:56 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 31 04:59:56 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=51 pruub=9.337196350s) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active pruub 95.934837341s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 04:59:56 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=51 pruub=9.337196350s) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown pruub 95.934837341s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v109: 305 pgs: 62 unknown, 243 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 04:59:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 31 04:59:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.16( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.13( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.c( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.a( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.5( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.7( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.1d( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 04:59:57 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.16( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.13( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=51/52 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.5( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.7( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.1d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 04:59:58 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 31 04:59:58 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 31 04:59:59 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 31 04:59:59 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 31 04:59:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v111: 305 pgs: 31 unknown, 274 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:00:00 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 31 05:00:00 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 31 05:00:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v112: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 31 05:00:01 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.581492424s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.975326538s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.561657906s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.955528259s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.581460953s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.975326538s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[6.5( v 34'39 (0'0,34'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.564957619s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 active pruub 110.958839417s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.561635017s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.955528259s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.561577797s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.955490112s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[6.5( v 34'39 (0'0,34'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.564910889s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 110.958839417s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.561555862s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.955490112s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[6.9( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.564797401s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 active pruub 110.958786011s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[6.9( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.564785004s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 110.958786011s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580788612s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.974899292s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580769539s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.974891663s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[6.7( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.564645767s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 active pruub 110.958793640s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580773354s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.974899292s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580754280s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.974891663s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[6.7( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.564632416s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 110.958793640s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580642700s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.974891663s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580623627s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.974891663s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[6.b( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.564702034s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 active pruub 110.958984375s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580742836s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.975028992s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[6.b( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.564682961s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 110.958984375s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580718994s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.975028992s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580669403s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.975044250s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[6.1( v 34'39 (0'0,34'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.564609528s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 active pruub 110.959022522s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580652237s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.975044250s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[6.1( v 34'39 (0'0,34'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.564593315s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 110.959022522s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580620766s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.975090027s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[6.3( v 34'39 (0'0,34'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.564648628s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 active pruub 110.959129333s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580602646s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.975090027s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[6.3( v 34'39 (0'0,34'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.564628601s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 110.959129333s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580522537s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.975059509s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580511093s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.975059509s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580430984s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.975097656s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580467224s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.975143433s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580451965s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.975143433s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[6.d( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.564454079s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 active pruub 110.959159851s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580415726s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.975097656s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[6.d( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.564435005s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 110.959159851s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[6.f( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.564352036s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 active pruub 110.959144592s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580266953s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.975097656s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580281258s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.975128174s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580254555s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.975097656s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580262184s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.975128174s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[6.f( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.564332008s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 110.959144592s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580173492s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.975090027s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580153465s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.975090027s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580174446s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.975135803s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580161095s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.975135803s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580236435s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.975318909s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.579978943s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.975036621s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580220222s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.975318909s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.579895973s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.975036621s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.580006599s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.975318909s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.579984665s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.975318909s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.579795837s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 108.975143433s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.579771996s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 108.975143433s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[4.18( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[4.10( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[4.1b( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[4.1a( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[4.12( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[4.e( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[4.14( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[4.8( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[4.1( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[4.9( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[4.a( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[4.13( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[4.11( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[4.5( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[4.1c( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.559847832s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.693969727s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.915813446s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 95.049972534s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.559813499s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.693969727s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.552837372s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.687011719s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.915791512s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 95.049972534s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.552815437s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.687011719s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.072843552s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.207214355s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.895288467s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 95.029701233s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.072787285s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.207214355s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.895259857s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 95.029701233s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.072695732s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.207260132s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.072654724s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.207260132s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.072562218s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.207183838s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.072540283s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.207183838s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.915157318s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 95.049835205s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.915138245s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 95.049835205s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.072363853s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.207122803s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.558919907s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.693687439s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.072340965s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.207122803s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.558899879s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.693687439s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.12( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.894783020s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 active pruub 95.029724121s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.559594154s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.694541931s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.559578896s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.694541931s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.12( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.894737244s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 95.029724121s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.558934212s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.694015503s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.072022438s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.207115173s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.071939468s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.207046509s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[4.7( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.558919907s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.694015503s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[6.1( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[4.d( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[4.f( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[4.4( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.071919441s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.207046509s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.071981430s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.207115173s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.558546066s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.693916321s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.558531761s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.693916321s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.071665764s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.207099915s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.914422989s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 95.049865723s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.914458275s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 95.049919128s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.071643829s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.207099915s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[4.2( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.558294296s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.693763733s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.914436340s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 95.049919128s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.914393425s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 95.049865723s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.558333397s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.693946838s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.558310509s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.693946838s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.914243698s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 95.049942017s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.914211273s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 95.049942017s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.071249008s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.207008362s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.071228027s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.207008362s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.945831299s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.158439636s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.945802689s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.158439636s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.558064461s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.694000244s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.914050102s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 95.050003052s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.558043480s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.694000244s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.914028168s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 95.050003052s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.070829391s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.206855774s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.070804596s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.206855774s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.557616234s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.693763733s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.913859367s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 95.050003052s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.587019920s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.799766541s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.913824081s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 95.050003052s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.587006569s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.799766541s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.1f( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.538373947s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.751167297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.070609093s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.206832886s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.1f( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.538345337s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.751167297s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.070588112s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.206832886s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.557887077s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.694168091s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.557864189s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.694168091s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.913722992s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 95.050086975s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.557864189s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.694259644s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.913702965s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 95.050086975s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.557844162s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.694259644s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.070295334s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.206787109s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.070279121s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.206787109s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.9( v 51'19 (0'0,51'19] local-lis/les=49/51 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.913636208s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 active pruub 95.050254822s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.557652473s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.694274902s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.557628632s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.694274902s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.069909096s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.206565857s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.9( v 51'19 (0'0,51'19] local-lis/les=49/51 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.913599014s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 95.050254822s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.069884300s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.206565857s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.069811821s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.206542969s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.069795609s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.206542969s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.557538033s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.694366455s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.913475037s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 95.050331116s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.069511414s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.206382751s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[5.1e( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.752880096s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 100.966964722s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.752863884s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 100.966964722s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.585552216s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.799720764s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.585614204s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.799812317s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.585536003s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.799720764s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.585602760s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.799812317s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.956349373s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.170692444s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.956330299s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.170692444s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.1d( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.531627655s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.746009827s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.1d( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.531604767s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.746009827s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.760513306s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 100.975082397s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.760477066s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 100.975082397s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.956022263s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.170684814s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.956007004s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.170684814s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.557515144s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.694366455s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.557507515s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.694396973s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.913451195s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 95.050331116s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.069483757s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.206382751s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.557448387s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.694396973s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.913279533s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 95.050285339s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.069831848s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.206878662s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.913250923s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 95.050285339s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.069812775s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.206878662s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.1e( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.536883354s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.751701355s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.584952354s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.799812317s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.557224274s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.694313049s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.557206154s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.694313049s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.069239616s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.206405640s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.d( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.913189888s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 active pruub 95.050369263s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.069224358s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.206405640s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.557164192s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.694374084s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.d( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.913151741s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 95.050369263s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.e( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.913129807s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 active pruub 95.050361633s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.e( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.913049698s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 95.050361633s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.912862778s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 95.050338745s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.912842751s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 95.050338745s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.068605423s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.206230164s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.068588257s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.206230164s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.557147980s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.694374084s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.912549019s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 95.050407410s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/51 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.912530899s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 95.050407410s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.912301064s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 95.050384521s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.912278175s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 95.050384521s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.068367958s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.206192017s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.068058014s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.206192017s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.14( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.912160873s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 active pruub 95.050407410s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.068002701s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.206237793s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.14( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.912140846s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 95.050407410s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.067982674s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.206237793s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.556130409s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.694488525s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.556107521s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.694488525s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.15( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.911985397s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 active pruub 95.050392151s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.15( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.911955833s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 95.050392151s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.556254387s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.694824219s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.058243752s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.196815491s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.058154106s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.196784973s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.556208611s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.694824219s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.058196068s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.196815491s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.058133125s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.196784973s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.911645889s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 95.050430298s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.911623001s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 95.050430298s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.555947304s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.694801331s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.555925369s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.694801331s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.911453247s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 95.050415039s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.057732582s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.196739197s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/51 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53 pruub=10.911431313s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 95.050415039s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.057708740s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.196739197s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.555303574s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 97.694526672s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.555282593s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 97.694526672s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[2.19( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.068983078s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 93.206222534s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.066355705s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 93.206222534s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.584806442s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.799674988s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.1e( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.536852837s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.751701355s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.584933281s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.799812317s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.584780693s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.799674988s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.584546089s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.799552917s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.584533691s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.799552917s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.1b( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.536427498s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.751457214s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.584475517s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.799514771s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.584451675s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.799514771s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[8.15( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.760087967s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 100.975173950s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.955630302s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.170753479s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.760067940s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 100.975173950s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.955618858s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.170753479s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[2.18( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[10.1e( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.1b( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.536354065s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.751457214s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.584432602s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.799690247s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.584413528s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.799690247s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.955450058s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.170867920s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.955426216s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.170867920s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.759935379s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 100.975425720s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.759918213s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 100.975425720s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.583957672s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.799507141s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.955463409s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171066284s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.583888054s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.799507141s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.955442429s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171066284s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.18( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.535683632s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.751342773s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.18( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.535668373s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.751342773s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.955233574s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171195984s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.955216408s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171195984s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.583445549s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.799545288s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.583428383s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.799545288s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.583242416s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.799407959s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.583188057s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.799407959s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.7( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.535103798s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.751358032s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.7( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.535084724s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.751358032s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.582620621s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.799102783s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.758889198s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 100.975402832s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.582596779s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.799102783s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.954573631s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171119690s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.954552650s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171119690s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.758733749s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 100.975402832s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.582299232s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.799095154s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.6( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.534569740s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.751396179s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.582197189s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.799072266s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[7.1a( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[11.15( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.581576347s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.799087524s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.581601143s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.799095154s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.953515053s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171119690s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.953495026s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171119690s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.581520081s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.799087524s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.5( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.533733368s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.751441956s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.5( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.533709526s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.751441956s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.6( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.533570290s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.751396179s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[3.1d( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.757540703s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 100.975463867s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.757518768s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 100.975463867s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.953203201s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171203613s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.3( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.533499718s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.751518250s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.3( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.533477783s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.751518250s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[2.16( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[5.14( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[2.13( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[3.1e( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.953141212s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171203613s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[5.15( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.580893517s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.799072266s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.580758095s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.799087524s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.580734253s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.799087524s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.580617905s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.799087524s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.1( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.533524513s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.751907349s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.580595970s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.799087524s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.1( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.533352852s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.751907349s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.757083893s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 100.975708008s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.757061958s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 100.975708008s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.757089615s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 100.975784302s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.952534676s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171241760s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.571208954s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.789947510s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.757062912s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 100.975784302s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.571185112s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.789947510s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.8( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.532889366s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.751739502s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.8( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.532869339s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.751739502s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.952380180s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171272278s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.952357292s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171272278s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.570945740s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.789955139s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.a( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.532717705s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.751754761s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.a( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.532693863s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.751754761s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.570748329s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.789871216s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.570734024s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.789871216s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[8.11( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.756502151s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 100.975753784s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.756484032s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 100.975753784s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.951941490s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171287537s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.951920509s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171287537s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.570411682s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.789802551s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.570386887s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.789802551s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.951944351s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171394348s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.951927185s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171394348s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[11.11( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.570288658s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.789802551s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.570260048s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.789802551s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.952240944s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171241760s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[2.11( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[8.12( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.570152283s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.789817810s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.951785088s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171531677s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.570124626s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.789817810s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.951770782s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171531677s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.570024490s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.789794922s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.570002556s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.789794922s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.569992065s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.789886475s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.569964409s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.789886475s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.9( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.531990051s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.751953125s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.569793701s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.789779663s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.9( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.531969070s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.751953125s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.755804062s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 100.975837708s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.569769859s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.789779663s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.755790710s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 100.975837708s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.951312065s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171424866s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[10.7( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.951292992s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171424866s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.569628716s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.789764404s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.569549561s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.789764404s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.c( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.531789780s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.752029419s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.c( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.531701088s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.752029419s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[11.12( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[5.1d( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[2.f( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[3.18( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[10.4( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[7.1c( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.570914268s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.789955139s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[10.10( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[2.b( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[3.7( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.568419456s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.789756775s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.568360329s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.789756775s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[10.8( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[2.17( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[7.2( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[11.d( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[5.7( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[10.9( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[7.1( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[3.5( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[11.b( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[10.11( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.567330360s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.789894104s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.753291130s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 100.975883484s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[8.d( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.567311287s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.789894104s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.753266335s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 100.975883484s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.566745758s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.789443970s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.e( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.529268265s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.751976013s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.566733360s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.789443970s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.e( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.529252052s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.751976013s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[2.8( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[2.2( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[5.11( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.948378563s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171508789s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.566259384s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.789428711s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.f( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.528787613s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.751960754s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[7.5( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.948348045s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171508789s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.f( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.528756142s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.751960754s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.566228867s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.789428711s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.752550125s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 39'483 active pruub 100.975906372s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.752509117s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 39'483 unknown NOTIFY pruub 100.975906372s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.566006660s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.789436340s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.565982819s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.789436340s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.948050499s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171569824s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.948027611s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171569824s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.565864563s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.789421082s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.947935104s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171539307s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.947913170s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171539307s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[7.c( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.11( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.528314590s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.751998901s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[3.8( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.11( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.528294563s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.751998901s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.752118111s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 100.975952148s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.947778702s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171623230s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.752094269s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 100.975952148s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.947759628s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171623230s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[11.2( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.565846443s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.789421082s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[11.3( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.12( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.527912140s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.752052307s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[5.12( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.12( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.527889252s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.752052307s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.947237015s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171577454s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.565040588s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.789428711s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.947214127s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171577454s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.564997673s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.789421082s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.565012932s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.789428711s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[11.8( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.564975739s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.789421082s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[11.9( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.564843178s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.789405823s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.947056770s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171630859s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.564826012s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.789405823s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.947036743s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171630859s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[8.2( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[5.5( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[5.4( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[7.8( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[10.12( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[7.e( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.750535965s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 100.975990295s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.750489235s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 100.975982666s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.750513077s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 100.975990295s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[5.3( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[7.a( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[10.e( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[3.e( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[8.4( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[10.1( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.750467300s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 100.975982666s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[5.2( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[8.1b( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[11.18( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.563387871s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.788978577s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.563368797s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.788978577s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.15( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.526690483s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.752357483s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.15( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.526668549s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.752357483s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[2.15( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.945511818s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171653748s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.945491791s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171653748s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[10.19( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[10.1a( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[3.11( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[5.16( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[10.15( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.16( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.524942398s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.752235413s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.16( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.524898529s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.752235413s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[11.1a( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.561156273s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.788635254s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.944321632s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 103.171829224s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.561129570s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.788635254s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.944305420s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 103.171829224s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.748397827s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 100.976036072s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.748385429s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 100.976036072s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[7.15( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[5.9( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[2.1d( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[11.1b( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.561366081s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 106.789413452s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.561346054s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 106.789413452s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[11.1c( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.17( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.524177551s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 103.752357483s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[3.17( empty local-lis/les=43/45 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53 pruub=12.524152756s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 103.752357483s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.552196503s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 106.780464172s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[7.11( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[10.16( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[10.6( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.552066803s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 106.780464172s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[11.1e( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[2.d( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[3.16( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[5.13( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[11.1f( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[10.17( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 53 pg[8.1c( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[5.c( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[2.7( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[5.f( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[2.1f( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[10.b( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[2.3( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[10.f( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[10.d( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[2.1c( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[11.17( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[2.4( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[2.5( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[2.9( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[10.2( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[3.1f( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[10.13( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[11.14( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[8.14( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[2.a( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[10.14( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[2.1b( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[5.1( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[8.10( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[5.1a( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[5.19( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[3.1b( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[5.18( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[11.10( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 53 pg[2.6( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[11.f( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[8.c( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[11.e( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[3.6( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[3.3( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[8.e( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[3.1( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[3.a( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[8.f( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[11.1( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[8.9( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[8.b( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[3.9( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[11.4( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[3.c( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[8.6( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[11.6( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[3.f( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[11.19( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[3.12( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[8.18( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[8.1a( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[8.1f( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[3.15( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[8.1d( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[3.17( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 31 05:00:01 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 31 05:00:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 31 05:00:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 31 05:00:02 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:00:02 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:00:02 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:00:02 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 05:00:02 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:00:02 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 05:00:02 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:00:02 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:00:02 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:00:02 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[5.15( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[2.11( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[2.16( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[2.8( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[5.2( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[10.e( v 51'19 lc 36'4 (0'0,51'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=51'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[2.1f( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[10.d( v 51'19 lc 36'5 (0'0,51'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=51'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[5.3( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[5.5( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[2.f( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[5.14( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[2.b( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[2.1c( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[5.4( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[2.1d( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[10.15( v 51'19 lc 36'3 (0'0,51'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=51'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[2.2( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[5.7( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[10.9( v 51'19 lc 36'8 (0'0,51'19] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=51'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[5.1e( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[2.18( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[2.19( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[11.10( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[3.f( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[2.13( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[7.4( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[11.4( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[3.c( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[3.1( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[7.9( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[11.14( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[7.18( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[7.6( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[3.1b( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[8.6( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[7.1f( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[11.6( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[3.3( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[11.e( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[8.f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[3.6( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[11.f( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[7.3( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[7.f( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[3.a( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[3.9( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[7.13( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[3.17( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[11.1( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[3.15( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[3.12( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[11.19( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[11.17( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=53/54 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[3.1f( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 54 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[5.11( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[2.17( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[5.13( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[2.15( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[5.12( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[5.9( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[2.d( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[5.16( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[5.f( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[2.5( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[2.3( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[5.c( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[2.a( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[2.7( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[5.1( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[2.4( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[2.6( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[2.9( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[5.1d( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[10.12( v 51'19 lc 39'17 (0'0,51'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=51'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[2.1b( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[10.14( v 51'19 lc 36'7 (0'0,51'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=51'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[5.1a( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[5.19( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[4.8( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[4.7( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[6.5( v 34'39 lc 32'9 (0'0,34'39] local-lis/les=53/54 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[6.3( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=53/54 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=34'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[4.2( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[4.4( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[6.9( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[4.f( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[6.d( v 34'39 lc 32'13 (0'0,34'39] local-lis/les=53/54 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[6.7( v 34'39 lc 32'21 (0'0,34'39] local-lis/les=53/54 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[4.d( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[6.f( v 34'39 lc 32'1 (0'0,34'39] local-lis/les=53/54 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[6.b( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=53/54 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=34'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[5.18( empty local-lis/les=53/54 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[4.9( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[4.5( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[4.14( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[4.12( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[6.1( v 34'39 (0'0,34'39] local-lis/les=53/54 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 54 pg[4.10( empty local-lis/les=53/54 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v115: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2/249 objects degraded (0.803%); 1/249 objects misplaced (0.402%); 0 B/s, 0 objects/s recovering
Jan 31 05:00:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 31 05:00:03 np0005603787 ceph-mon[75160]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 2/249 objects degraded (0.803%), 1 pg degraded (PG_DEGRADED)
Jan 31 05:00:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 31 05:00:03 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 31 05:00:03 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:03 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:03 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:03 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:03 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:03 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:03 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:03 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:03 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:03 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:03 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:03 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 55 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:03 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:03 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:03 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:03 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:04 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 31 05:00:04 np0005603787 ceph-mon[75160]: Health check failed: Degraded data redundancy: 2/249 objects degraded (0.803%), 1 pg degraded (PG_DEGRADED)
Jan 31 05:00:04 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 31 05:00:04 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 31 05:00:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.013037682s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 109.275222778s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.012906075s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.275222778s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.014030457s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 109.277023315s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.013910294s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.277023315s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.013779640s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 109.277000427s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.013618469s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.277000427s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.013606071s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 109.277153015s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.013518333s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.277153015s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.013146400s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 109.277130127s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.013077736s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.277130127s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 56 pg[9.5( v 55'485 (0'0,55'485] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.012673378s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=50'484 lcod 50'484 active pruub 109.277427673s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.012141228s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 109.276985168s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 56 pg[9.5( v 55'485 (0'0,55'485] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.012590408s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=50'484 lcod 50'484 unknown NOTIFY pruub 109.277427673s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.012095451s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.276985168s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.012018204s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 109.277168274s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.011959076s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.277168274s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.011679649s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 109.277015686s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.011630058s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.277015686s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:04 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:04 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:04 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:04 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:04 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:04 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:04 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:04 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:04 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:04 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:04 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 56 pg[9.5( v 55'485 (0'0,55'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=50'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:04 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 56 pg[9.5( v 55'485 (0'0,55'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=50'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:04 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:04 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:04 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:04 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:04 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:04 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:05 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 31 05:00:05 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 31 05:00:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v118: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2/249 objects degraded (0.803%); 1/249 objects misplaced (0.402%); 108 B/s, 1 objects/s recovering
Jan 31 05:00:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 31 05:00:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 31 05:00:05 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 31 05:00:05 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=13.998369217s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 109.277168274s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:05 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=13.999269485s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 109.278259277s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:05 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=13.999166489s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.278259277s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:05 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=13.998830795s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 109.278198242s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:05 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=13.998688698s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.278198242s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:05 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=13.998250961s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.277168274s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:05 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=13.998640060s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 109.278251648s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:05 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=13.997867584s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 109.277511597s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:05 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=13.997825623s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.277511597s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:05 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=13.998517036s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.278251648s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:05 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=13.996122360s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 109.277198792s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:05 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=13.996060371s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.277198792s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:05 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=13.995732307s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 109.277191162s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:05 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=13.995664597s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.277191162s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.5( v 55'485 (0'0,55'485] local-lis/les=56/57 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=55'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:05 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:05 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 31 05:00:05 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 31 05:00:06 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 31 05:00:06 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 31 05:00:06 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 31 05:00:06 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 31 05:00:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:00:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 31 05:00:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 31 05:00:06 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 31 05:00:06 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 58 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:06 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:06 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:06 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:06 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 58 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:06 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:06 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v121: 305 pgs: 7 active+remapped, 1 active+recovery_wait+degraded, 1 active+recovering, 296 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 2/249 objects degraded (0.803%); 1/249 objects misplaced (0.402%); 875 B/s, 2 keys/s, 14 objects/s recovering
Jan 31 05:00:08 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 31 05:00:08 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 31 05:00:08 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 31 05:00:08 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 31 05:00:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v122: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 997 B/s, 1 keys/s, 20 objects/s recovering
Jan 31 05:00:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 31 05:00:09 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 05:00:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 31 05:00:09 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 05:00:09 np0005603787 ceph-mgr[75453]: [progress INFO root] Completed event 9372f1f3-9deb-49a0-98e2-c38da874382f (Global Recovery Event) in 20 seconds
Jan 31 05:00:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 31 05:00:09 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Jan 31 05:00:09 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Jan 31 05:00:09 np0005603787 ceph-mon[75160]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/249 objects degraded (0.803%), 1 pg degraded)
Jan 31 05:00:09 np0005603787 ceph-mon[75160]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 05:00:09 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 05:00:09 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 05:00:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 31 05:00:09 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 31 05:00:10 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 05:00:10 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 05:00:10 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 59 pg[6.a( v 34'39 (0'0,34'39] local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=12.771222115s) [1] r=-1 lpr=59 pi=[45,59)/1 crt=34'39 lcod 0'0 active pruub 118.959503174s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:10 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 59 pg[6.a( v 34'39 (0'0,34'39] local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=12.771071434s) [1] r=-1 lpr=59 pi=[45,59)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 118.959503174s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:10 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 59 pg[6.6( v 34'39 (0'0,34'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=12.771059036s) [1] r=-1 lpr=59 pi=[45,59)/1 crt=34'39 lcod 0'0 active pruub 118.959655762s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:10 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 59 pg[6.6( v 34'39 (0'0,34'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=12.771037102s) [1] r=-1 lpr=59 pi=[45,59)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 118.959655762s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:10 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 59 pg[6.2( v 34'39 (0'0,34'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=12.770806313s) [1] r=-1 lpr=59 pi=[45,59)/1 crt=34'39 lcod 0'0 active pruub 118.959663391s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:10 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 59 pg[6.2( v 34'39 (0'0,34'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=12.770790100s) [1] r=-1 lpr=59 pi=[45,59)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 118.959663391s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:10 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 59 pg[6.e( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=12.770766258s) [1] r=-1 lpr=59 pi=[45,59)/1 crt=34'39 lcod 0'0 active pruub 118.959793091s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:10 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 59 pg[6.e( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=12.770710945s) [1] r=-1 lpr=59 pi=[45,59)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 118.959793091s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:10 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 59 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:10 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 59 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:10 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 59 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:10 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 59 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v124: 305 pgs: 4 peering, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 864 B/s, 1 keys/s, 19 objects/s recovering
Jan 31 05:00:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 31 05:00:11 np0005603787 ceph-mon[75160]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/249 objects degraded (0.803%), 1 pg degraded)
Jan 31 05:00:11 np0005603787 ceph-mon[75160]: Cluster is now healthy
Jan 31 05:00:11 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 05:00:11 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 05:00:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 31 05:00:11 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 31 05:00:11 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 60 pg[6.a( v 34'39 (0'0,34'39] local-lis/les=59/60 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:11 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 60 pg[6.2( v 34'39 (0'0,34'39] local-lis/les=59/60 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:11 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 60 pg[6.e( v 34'39 lc 32'19 (0'0,34'39] local-lis/les=59/60 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:11 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 60 pg[6.6( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=59/60 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=34'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:11 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 31 05:00:11 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 31 05:00:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v126: 305 pgs: 4 peering, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 322 B/s, 8 objects/s recovering
Jan 31 05:00:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:00:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:00:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:00:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:00:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:00:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:00:14 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 31 05:00:14 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 31 05:00:14 np0005603787 ceph-mgr[75453]: [progress INFO root] Writing back 17 completed events
Jan 31 05:00:14 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 05:00:14 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:00:14 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:00:14 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 31 05:00:14 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 31 05:00:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v127: 305 pgs: 4 peering, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 264 B/s, 7 objects/s recovering
Jan 31 05:00:15 np0005603787 python3[98144]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:00:15 np0005603787 podman[98145]: 2026-01-31 10:00:15.127031287 +0000 UTC m=+0.032312053 container create 04ea67a0b0f7bb4f2c26d8155bcbe77d931be4e27116c9b841947b2646edbccb (image=quay.io/ceph/ceph:v20, name=modest_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 05:00:15 np0005603787 systemd[76537]: Starting Mark boot as successful...
Jan 31 05:00:15 np0005603787 systemd[1]: Started libpod-conmon-04ea67a0b0f7bb4f2c26d8155bcbe77d931be4e27116c9b841947b2646edbccb.scope.
Jan 31 05:00:15 np0005603787 systemd[76537]: Finished Mark boot as successful.
Jan 31 05:00:15 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:00:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93b78be99eb1538669cff2f346e338c2af968bb8e490cf6aa2554a876102d03c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:00:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93b78be99eb1538669cff2f346e338c2af968bb8e490cf6aa2554a876102d03c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:00:15 np0005603787 podman[98145]: 2026-01-31 10:00:15.168245201 +0000 UTC m=+0.073525987 container init 04ea67a0b0f7bb4f2c26d8155bcbe77d931be4e27116c9b841947b2646edbccb (image=quay.io/ceph/ceph:v20, name=modest_hofstadter, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 05:00:15 np0005603787 podman[98145]: 2026-01-31 10:00:15.17224369 +0000 UTC m=+0.077524456 container start 04ea67a0b0f7bb4f2c26d8155bcbe77d931be4e27116c9b841947b2646edbccb (image=quay.io/ceph/ceph:v20, name=modest_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 05:00:15 np0005603787 podman[98145]: 2026-01-31 10:00:15.17517415 +0000 UTC m=+0.080454906 container attach 04ea67a0b0f7bb4f2c26d8155bcbe77d931be4e27116c9b841947b2646edbccb (image=quay.io/ceph/ceph:v20, name=modest_hofstadter, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:00:15 np0005603787 podman[98145]: 2026-01-31 10:00:15.114168506 +0000 UTC m=+0.019449292 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:00:15 np0005603787 modest_hofstadter[98161]: could not fetch user info: no user info saved
Jan 31 05:00:15 np0005603787 systemd[1]: libpod-04ea67a0b0f7bb4f2c26d8155bcbe77d931be4e27116c9b841947b2646edbccb.scope: Deactivated successfully.
Jan 31 05:00:15 np0005603787 conmon[98161]: conmon 04ea67a0b0f7bb4f2c26 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-04ea67a0b0f7bb4f2c26d8155bcbe77d931be4e27116c9b841947b2646edbccb.scope/container/memory.events
Jan 31 05:00:15 np0005603787 podman[98145]: 2026-01-31 10:00:15.368970659 +0000 UTC m=+0.274251425 container died 04ea67a0b0f7bb4f2c26d8155bcbe77d931be4e27116c9b841947b2646edbccb (image=quay.io/ceph/ceph:v20, name=modest_hofstadter, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:00:15 np0005603787 systemd[1]: var-lib-containers-storage-overlay-93b78be99eb1538669cff2f346e338c2af968bb8e490cf6aa2554a876102d03c-merged.mount: Deactivated successfully.
Jan 31 05:00:15 np0005603787 podman[98145]: 2026-01-31 10:00:15.406577935 +0000 UTC m=+0.311858711 container remove 04ea67a0b0f7bb4f2c26d8155bcbe77d931be4e27116c9b841947b2646edbccb (image=quay.io/ceph/ceph:v20, name=modest_hofstadter, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 05:00:15 np0005603787 systemd[1]: libpod-conmon-04ea67a0b0f7bb4f2c26d8155bcbe77d931be4e27116c9b841947b2646edbccb.scope: Deactivated successfully.
Jan 31 05:00:15 np0005603787 python3[98284]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:00:15 np0005603787 podman[98285]: 2026-01-31 10:00:15.697785191 +0000 UTC m=+0.032332993 container create 634eb0ad6fb7ffb5f96c793cb8edfa6f77742f5e26be1ed8da81f3ef4ec57cd2 (image=quay.io/ceph/ceph:v20, name=hopeful_chebyshev, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 05:00:15 np0005603787 systemd[1]: Started libpod-conmon-634eb0ad6fb7ffb5f96c793cb8edfa6f77742f5e26be1ed8da81f3ef4ec57cd2.scope.
Jan 31 05:00:15 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:00:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c37592d8ac376e5e5eaa225bd8fa74e3b86c68c3084821a10129e6aad8598342/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:00:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c37592d8ac376e5e5eaa225bd8fa74e3b86c68c3084821a10129e6aad8598342/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:00:15 np0005603787 podman[98285]: 2026-01-31 10:00:15.756205065 +0000 UTC m=+0.090752887 container init 634eb0ad6fb7ffb5f96c793cb8edfa6f77742f5e26be1ed8da81f3ef4ec57cd2 (image=quay.io/ceph/ceph:v20, name=hopeful_chebyshev, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 05:00:15 np0005603787 podman[98285]: 2026-01-31 10:00:15.761658514 +0000 UTC m=+0.096206316 container start 634eb0ad6fb7ffb5f96c793cb8edfa6f77742f5e26be1ed8da81f3ef4ec57cd2 (image=quay.io/ceph/ceph:v20, name=hopeful_chebyshev, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 31 05:00:15 np0005603787 podman[98285]: 2026-01-31 10:00:15.766011422 +0000 UTC m=+0.100559224 container attach 634eb0ad6fb7ffb5f96c793cb8edfa6f77742f5e26be1ed8da81f3ef4ec57cd2 (image=quay.io/ceph/ceph:v20, name=hopeful_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 05:00:15 np0005603787 podman[98285]: 2026-01-31 10:00:15.683760519 +0000 UTC m=+0.018308351 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]: {
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "user_id": "openstack",
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "display_name": "openstack",
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "email": "",
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "suspended": 0,
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "max_buckets": 1000,
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "subusers": [],
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "keys": [
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:        {
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:            "user": "openstack",
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:            "access_key": "7DD9AAVFMSJIBJUI951T",
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:            "secret_key": "eDPUQFEzGcYWXR0PH4dlUvfF7io5HHBQgMZuYAhl",
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:            "active": true,
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:            "create_date": "2026-01-31T10:00:15.958895Z"
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:        }
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    ],
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "swift_keys": [],
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "caps": [],
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "op_mask": "read, write, delete",
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "default_placement": "",
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "default_storage_class": "",
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "placement_tags": [],
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "bucket_quota": {
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:        "enabled": false,
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:        "check_on_raw": false,
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:        "max_size": -1,
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:        "max_size_kb": 0,
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:        "max_objects": -1
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    },
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "user_quota": {
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:        "enabled": false,
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:        "check_on_raw": false,
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:        "max_size": -1,
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:        "max_size_kb": 0,
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:        "max_objects": -1
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    },
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "temp_url_keys": [],
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "type": "rgw",
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "mfa_ids": [],
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "account_id": "",
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "path": "/",
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "create_date": "2026-01-31T10:00:15.958662Z",
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "tags": [],
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]:    "group_ids": []
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]: }
Jan 31 05:00:15 np0005603787 hopeful_chebyshev[98300]: 
Jan 31 05:00:15 np0005603787 systemd[1]: libpod-634eb0ad6fb7ffb5f96c793cb8edfa6f77742f5e26be1ed8da81f3ef4ec57cd2.scope: Deactivated successfully.
Jan 31 05:00:15 np0005603787 podman[98285]: 2026-01-31 10:00:15.990894579 +0000 UTC m=+0.325442381 container died 634eb0ad6fb7ffb5f96c793cb8edfa6f77742f5e26be1ed8da81f3ef4ec57cd2 (image=quay.io/ceph/ceph:v20, name=hopeful_chebyshev, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:00:16 np0005603787 systemd[1]: var-lib-containers-storage-overlay-c37592d8ac376e5e5eaa225bd8fa74e3b86c68c3084821a10129e6aad8598342-merged.mount: Deactivated successfully.
Jan 31 05:00:16 np0005603787 podman[98285]: 2026-01-31 10:00:16.02830139 +0000 UTC m=+0.362849192 container remove 634eb0ad6fb7ffb5f96c793cb8edfa6f77742f5e26be1ed8da81f3ef4ec57cd2 (image=quay.io/ceph/ceph:v20, name=hopeful_chebyshev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 05:00:16 np0005603787 systemd[1]: libpod-conmon-634eb0ad6fb7ffb5f96c793cb8edfa6f77742f5e26be1ed8da81f3ef4ec57cd2.scope: Deactivated successfully.
Jan 31 05:00:16 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:00:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v128: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 6 op/s; 0 B/s, 0 objects/s recovering
Jan 31 05:00:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 31 05:00:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 05:00:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 31 05:00:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 05:00:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 31 05:00:17 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 05:00:17 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 05:00:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 05:00:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 05:00:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 31 05:00:17 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 31 05:00:17 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 61 pg[6.3( v 34'39 (0'0,34'39] local-lis/les=53/54 n=2 ec=45/20 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=8.913952827s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=34'39 active pruub 116.231231689s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:17 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 61 pg[6.3( v 34'39 (0'0,34'39] local-lis/les=53/54 n=2 ec=45/20 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=8.913870811s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=34'39 unknown NOTIFY pruub 116.231231689s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:17 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 61 pg[6.f( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=45/20 lis/c=53/53 les/c/f=54/55/0 sis=61 pruub=8.914076805s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=34'39 active pruub 116.231674194s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:17 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 61 pg[6.f( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=45/20 lis/c=53/53 les/c/f=54/55/0 sis=61 pruub=8.914049149s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=34'39 unknown NOTIFY pruub 116.231674194s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:17 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 61 pg[6.7( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=45/20 lis/c=53/53 les/c/f=54/55/0 sis=61 pruub=8.913641930s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=34'39 active pruub 116.231613159s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:17 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 61 pg[6.b( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=45/20 lis/c=53/53 les/c/f=54/55/0 sis=61 pruub=8.913570404s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=34'39 active pruub 116.231697083s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:17 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 61 pg[6.b( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=45/20 lis/c=53/53 les/c/f=54/55/0 sis=61 pruub=8.913539886s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=34'39 unknown NOTIFY pruub 116.231697083s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:17 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 61 pg[6.7( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=45/20 lis/c=53/53 les/c/f=54/55/0 sis=61 pruub=8.913472176s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=34'39 unknown NOTIFY pruub 116.231613159s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:17 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=53/53 les/c/f=54/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:17 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 61 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:17 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 61 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=53/53 les/c/f=54/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:17 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 61 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=53/53 les/c/f=54/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 31 05:00:18 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 05:00:18 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 05:00:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 31 05:00:18 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 31 05:00:18 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 62 pg[6.b( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=61/62 n=1 ec=45/20 lis/c=53/53 les/c/f=54/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=34'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:18 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 62 pg[6.7( v 34'39 lc 32'21 (0'0,34'39] local-lis/les=61/62 n=1 ec=45/20 lis/c=53/53 les/c/f=54/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:18 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 62 pg[6.3( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=61/62 n=2 ec=45/20 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=34'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:18 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 62 pg[6.f( v 34'39 lc 32'1 (0'0,34'39] local-lis/les=61/62 n=1 ec=45/20 lis/c=53/53 les/c/f=54/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v131: 305 pgs: 4 peering, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 269 B/s wr, 26 op/s; 0 B/s, 0 objects/s recovering
Jan 31 05:00:19 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 31 05:00:19 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 31 05:00:20 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 31 05:00:20 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 31 05:00:21 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 31 05:00:21 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 31 05:00:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v132: 305 pgs: 4 peering, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 255 B/s wr, 35 op/s; 0 B/s, 0 objects/s recovering
Jan 31 05:00:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:00:22 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 31 05:00:22 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 31 05:00:22 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Jan 31 05:00:22 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Jan 31 05:00:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v133: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 255 B/s wr, 35 op/s; 80 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:00:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 31 05:00:23 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 05:00:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 31 05:00:23 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 05:00:23 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 31 05:00:23 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 31 05:00:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 31 05:00:23 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 05:00:23 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 05:00:23 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 05:00:23 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 05:00:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 31 05:00:23 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 31 05:00:24 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 63 pg[6.4( v 34'39 (0'0,34'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=14.457561493s) [1] r=-1 lpr=63 pi=[45,63)/1 crt=34'39 lcod 0'0 active pruub 134.956298828s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:24 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 63 pg[6.4( v 34'39 (0'0,34'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=14.457310677s) [1] r=-1 lpr=63 pi=[45,63)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 134.956298828s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:24 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 63 pg[6.c( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=14.460658073s) [1] r=-1 lpr=63 pi=[45,63)/1 crt=34'39 lcod 0'0 active pruub 134.960174561s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:24 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 63 pg[6.c( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=14.460502625s) [1] r=-1 lpr=63 pi=[45,63)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 134.960174561s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:24 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 63 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=63) [1] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:24 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 63 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=63) [1] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 31 05:00:24 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 05:00:24 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 05:00:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 31 05:00:24 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 31 05:00:24 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 64 pg[6.4( v 34'39 lc 32'15 (0'0,34'39] local-lis/les=63/64 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=63) [1] r=0 lpr=63 pi=[45,63)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:24 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 64 pg[6.c( v 34'39 lc 32'17 (0'0,34'39] local-lis/les=63/64 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=63) [1] r=0 lpr=63 pi=[45,63)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v136: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 op/s; 101 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:00:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 31 05:00:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 05:00:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 31 05:00:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 05:00:25 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 31 05:00:25 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 31 05:00:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 31 05:00:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 05:00:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 05:00:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 31 05:00:25 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 31 05:00:25 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 05:00:25 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 05:00:26 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 31 05:00:26 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 31 05:00:26 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:00:26 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 05:00:26 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 05:00:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v138: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 378 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:00:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 31 05:00:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 05:00:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 31 05:00:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 05:00:27 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 31 05:00:27 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 65 pg[6.d( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=45/20 lis/c=53/53 les/c/f=54/55/0 sis=65 pruub=15.288327217s) [0] r=-1 lpr=65 pi=[53,65)/1 crt=34'39 active pruub 132.231811523s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:27 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 65 pg[6.d( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=45/20 lis/c=53/53 les/c/f=54/55/0 sis=65 pruub=15.288286209s) [0] r=-1 lpr=65 pi=[53,65)/1 crt=34'39 unknown NOTIFY pruub 132.231811523s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:27 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 65 pg[6.5( v 34'39 (0'0,34'39] local-lis/les=53/54 n=2 ec=45/20 lis/c=53/53 les/c/f=54/54/0 sis=65 pruub=15.287468910s) [0] r=-1 lpr=65 pi=[53,65)/1 crt=34'39 active pruub 132.231399536s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:27 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 65 pg[6.5( v 34'39 (0'0,34'39] local-lis/les=53/54 n=2 ec=45/20 lis/c=53/53 les/c/f=54/54/0 sis=65 pruub=15.287426949s) [0] r=-1 lpr=65 pi=[53,65)/1 crt=34'39 unknown NOTIFY pruub 132.231399536s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:27 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 65 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=53/53 les/c/f=54/55/0 sis=65) [0] r=0 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:27 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 65 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=53/53 les/c/f=54/54/0 sis=65) [0] r=0 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:27 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 31 05:00:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 31 05:00:28 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 05:00:28 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 05:00:28 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 05:00:28 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 05:00:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 31 05:00:28 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 31 05:00:28 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=14.745253563s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=39'483 lcod 0'0 active pruub 132.976043701s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:28 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=14.745195389s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 132.976043701s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:28 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 66 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=14.745430946s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=60'488 lcod 60'488 active pruub 132.976364136s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:28 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 66 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=14.745392799s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=60'488 lcod 60'488 unknown NOTIFY pruub 132.976364136s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:28 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=14.745262146s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=39'483 lcod 0'0 active pruub 132.976379395s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:28 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=14.745123863s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 132.976379395s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:28 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 66 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=14.745013237s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=60'484 lcod 60'484 active pruub 132.976547241s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:28 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 66 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=66 pruub=14.744990349s) [2] r=-1 lpr=66 pi=[49,66)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 132.976547241s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:28 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:28 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:28 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:28 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=66) [2] r=0 lpr=66 pi=[49,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:28 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 66 pg[6.d( v 34'39 lc 32'13 (0'0,34'39] local-lis/les=65/66 n=1 ec=45/20 lis/c=53/53 les/c/f=54/55/0 sis=65) [0] r=0 lpr=65 pi=[53,65)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:28 np0005603787 systemd-logind[786]: New session 34 of user zuul.
Jan 31 05:00:28 np0005603787 systemd[1]: Started Session 34 of User zuul.
Jan 31 05:00:28 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 66 pg[6.5( v 34'39 lc 32'9 (0'0,34'39] local-lis/les=65/66 n=2 ec=45/20 lis/c=53/53 les/c/f=54/54/0 sis=65) [0] r=0 lpr=65 pi=[53,65)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:28 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 31 05:00:28 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 31 05:00:29 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 05:00:29 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 05:00:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v140: 305 pgs: 4 unknown, 2 peering, 299 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 309 B/s, 0 objects/s recovering
Jan 31 05:00:29 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 31 05:00:29 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 31 05:00:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 31 05:00:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 31 05:00:29 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 31 05:00:29 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:29 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:29 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:29 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:29 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:29 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:29 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:29 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[49,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:29 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 67 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=60'488 lcod 60'488 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:29 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 67 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=60'488 lcod 60'488 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:29 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:29 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:29 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 67 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:29 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:29 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:29 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 67 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] r=0 lpr=67 pi=[49,67)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:29 np0005603787 python3.9[98551]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:00:30 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 31 05:00:30 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 31 05:00:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 31 05:00:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 31 05:00:30 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 31 05:00:30 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:30 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 68 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=60'485 lcod 60'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:30 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 68 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=67/68 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=60'489 lcod 60'488 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:30 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:30 np0005603787 python3.9[98769]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:00:30 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 31 05:00:30 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 31 05:00:31 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 31 05:00:31 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 31 05:00:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v143: 305 pgs: 3 active+recovery_wait+remapped, 1 active+recovering+remapped, 2 peering, 299 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 23/253 objects misplaced (9.091%); 0 B/s, 0 objects/s recovering
Jan 31 05:00:31 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:00:31 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 31 05:00:31 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 31 05:00:31 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 31 05:00:31 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 69 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.839122772s) [2] async=[2] r=-1 lpr=69 pi=[49,69)/1 crt=39'483 lcod 0'0 active pruub 136.150985718s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:31 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 69 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.838802338s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 136.150985718s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:31 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 69 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.838608742s) [2] async=[2] r=-1 lpr=69 pi=[49,69)/1 crt=60'489 lcod 60'488 active pruub 136.151046753s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:31 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 69 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.838486671s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=60'489 lcod 60'488 unknown NOTIFY pruub 136.151046753s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:31 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 69 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.832369804s) [2] async=[2] r=-1 lpr=69 pi=[49,69)/1 crt=39'483 lcod 0'0 active pruub 136.144927979s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:31 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 69 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.832124710s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 136.144927979s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:31 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 69 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.837959290s) [2] async=[2] r=-1 lpr=69 pi=[49,69)/1 crt=60'485 lcod 60'484 active pruub 136.150985718s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:31 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 69 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69 pruub=14.837290764s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=60'485 lcod 60'484 unknown NOTIFY pruub 136.150985718s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:31 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 31 05:00:31 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 69 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 pct=0'0 crt=60'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:31 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 69 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=60'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:31 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 69 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:31 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 69 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:31 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 69 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:31 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 69 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:31 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 69 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 pct=0'0 crt=60'489 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:31 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 69 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=60'489 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:31 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 31 05:00:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 31 05:00:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 31 05:00:32 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 31 05:00:32 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 70 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=60'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:32 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 70 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:32 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 70 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=69/70 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:32 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 70 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=69/70 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=69) [2] r=0 lpr=69 pi=[49,69)/1 crt=60'489 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:32 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 31 05:00:32 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 31 05:00:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v146: 305 pgs: 3 active+recovery_wait+remapped, 1 active+recovering+remapped, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 KiB/s wr, 52 op/s; 23/253 objects misplaced (9.091%); 39 B/s, 1 objects/s recovering
Jan 31 05:00:33 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 31 05:00:33 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 31 05:00:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:00:33 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:00:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:00:33 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:00:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:00:33 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:00:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:00:33 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:00:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:00:33 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:00:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:00:33 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:00:33 np0005603787 podman[98926]: 2026-01-31 10:00:33.628780067 +0000 UTC m=+0.017556149 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:00:33 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:00:33 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:00:33 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:00:33 np0005603787 podman[98926]: 2026-01-31 10:00:33.760482271 +0000 UTC m=+0.149258333 container create ba57a8cccebb9403be376d299fbcee0fcdca013a40334d4fe6e2d036d32b10a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_solomon, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:00:33 np0005603787 systemd[1]: Started libpod-conmon-ba57a8cccebb9403be376d299fbcee0fcdca013a40334d4fe6e2d036d32b10a7.scope.
Jan 31 05:00:33 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:00:33 np0005603787 podman[98926]: 2026-01-31 10:00:33.973778582 +0000 UTC m=+0.362554654 container init ba57a8cccebb9403be376d299fbcee0fcdca013a40334d4fe6e2d036d32b10a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_solomon, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:00:33 np0005603787 podman[98926]: 2026-01-31 10:00:33.977835402 +0000 UTC m=+0.366611454 container start ba57a8cccebb9403be376d299fbcee0fcdca013a40334d4fe6e2d036d32b10a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_solomon, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 05:00:33 np0005603787 festive_solomon[98943]: 167 167
Jan 31 05:00:33 np0005603787 systemd[1]: libpod-ba57a8cccebb9403be376d299fbcee0fcdca013a40334d4fe6e2d036d32b10a7.scope: Deactivated successfully.
Jan 31 05:00:34 np0005603787 podman[98926]: 2026-01-31 10:00:34.009984709 +0000 UTC m=+0.398760761 container attach ba57a8cccebb9403be376d299fbcee0fcdca013a40334d4fe6e2d036d32b10a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:00:34 np0005603787 podman[98926]: 2026-01-31 10:00:34.010849693 +0000 UTC m=+0.399625765 container died ba57a8cccebb9403be376d299fbcee0fcdca013a40334d4fe6e2d036d32b10a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_solomon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 05:00:34 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ff81570e795581d49d957441e3d28a8c14d1542a9bd8210abd7276f195ac3207-merged.mount: Deactivated successfully.
Jan 31 05:00:35 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 31 05:00:35 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 31 05:00:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v147: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 36 op/s; 172 B/s, 4 objects/s recovering
Jan 31 05:00:35 np0005603787 podman[98926]: 2026-01-31 10:00:35.282462242 +0000 UTC m=+1.671238294 container remove ba57a8cccebb9403be376d299fbcee0fcdca013a40334d4fe6e2d036d32b10a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 05:00:35 np0005603787 systemd[1]: libpod-conmon-ba57a8cccebb9403be376d299fbcee0fcdca013a40334d4fe6e2d036d32b10a7.scope: Deactivated successfully.
Jan 31 05:00:35 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 31 05:00:35 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 05:00:35 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 31 05:00:35 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 05:00:35 np0005603787 podman[98977]: 2026-01-31 10:00:35.36851105 +0000 UTC m=+0.018630640 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:00:35 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 31 05:00:35 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 31 05:00:35 np0005603787 podman[98977]: 2026-01-31 10:00:35.544598245 +0000 UTC m=+0.194717805 container create 6d20c6f26a16a5275a7b180a2a80dfc58e8c3167febcf5cc1e9ca65c59b56773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_diffie, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:00:35 np0005603787 systemd[1]: Started libpod-conmon-6d20c6f26a16a5275a7b180a2a80dfc58e8c3167febcf5cc1e9ca65c59b56773.scope.
Jan 31 05:00:35 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:00:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9fb3f274e5b9a1043470255c85161cfa45897ba187fc2c69fdf962ee534c7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:00:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9fb3f274e5b9a1043470255c85161cfa45897ba187fc2c69fdf962ee534c7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:00:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9fb3f274e5b9a1043470255c85161cfa45897ba187fc2c69fdf962ee534c7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:00:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9fb3f274e5b9a1043470255c85161cfa45897ba187fc2c69fdf962ee534c7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:00:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9fb3f274e5b9a1043470255c85161cfa45897ba187fc2c69fdf962ee534c7a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:00:35 np0005603787 podman[98977]: 2026-01-31 10:00:35.683824464 +0000 UTC m=+0.333944134 container init 6d20c6f26a16a5275a7b180a2a80dfc58e8c3167febcf5cc1e9ca65c59b56773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_diffie, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 05:00:35 np0005603787 podman[98977]: 2026-01-31 10:00:35.68991372 +0000 UTC m=+0.340033320 container start 6d20c6f26a16a5275a7b180a2a80dfc58e8c3167febcf5cc1e9ca65c59b56773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:00:35 np0005603787 podman[98977]: 2026-01-31 10:00:35.794115074 +0000 UTC m=+0.444234664 container attach 6d20c6f26a16a5275a7b180a2a80dfc58e8c3167febcf5cc1e9ca65c59b56773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_diffie, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 05:00:36 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 31 05:00:36 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 31 05:00:36 np0005603787 lucid_diffie[98993]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:00:36 np0005603787 lucid_diffie[98993]: --> All data devices are unavailable
Jan 31 05:00:36 np0005603787 systemd[1]: libpod-6d20c6f26a16a5275a7b180a2a80dfc58e8c3167febcf5cc1e9ca65c59b56773.scope: Deactivated successfully.
Jan 31 05:00:36 np0005603787 podman[98977]: 2026-01-31 10:00:36.063306499 +0000 UTC m=+0.713426059 container died 6d20c6f26a16a5275a7b180a2a80dfc58e8c3167febcf5cc1e9ca65c59b56773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 05:00:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 31 05:00:36 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 05:00:36 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 05:00:36 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 05:00:36 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 05:00:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 31 05:00:36 np0005603787 systemd[1]: var-lib-containers-storage-overlay-9d9fb3f274e5b9a1043470255c85161cfa45897ba187fc2c69fdf962ee534c7a-merged.mount: Deactivated successfully.
Jan 31 05:00:36 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 31 05:00:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:00:36 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 71 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=71 pruub=10.120306969s) [2] r=-1 lpr=71 pi=[57,71)/1 crt=60'486 lcod 60'486 active pruub 142.497238159s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:36 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 71 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=71 pruub=10.120264053s) [2] r=-1 lpr=71 pi=[57,71)/1 crt=60'484 lcod 60'484 active pruub 142.497268677s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:36 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 71 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=71 pruub=10.120223045s) [2] r=-1 lpr=71 pi=[57,71)/1 crt=60'486 lcod 60'486 unknown NOTIFY pruub 142.497238159s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:36 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 71 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=71 pruub=10.120174408s) [2] r=-1 lpr=71 pi=[57,71)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 142.497268677s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:36 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 71 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=71 pruub=10.119600296s) [2] r=-1 lpr=71 pi=[57,71)/1 crt=60'484 lcod 60'484 active pruub 142.497283936s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:36 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 71 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=71 pruub=10.119569778s) [2] r=-1 lpr=71 pi=[57,71)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 142.497283936s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:36 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 71 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=71 pruub=9.105570793s) [2] r=-1 lpr=71 pi=[56,71)/1 crt=39'483 active pruub 141.483978271s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:36 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 71 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=71 pruub=9.105430603s) [2] r=-1 lpr=71 pi=[56,71)/1 crt=39'483 unknown NOTIFY pruub 141.483978271s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:36 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 71 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=71) [2] r=0 lpr=71 pi=[56,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:36 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 71 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=71) [2] r=0 lpr=71 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:36 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 71 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=71) [2] r=0 lpr=71 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:36 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 71 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=71) [2] r=0 lpr=71 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:36 np0005603787 podman[98977]: 2026-01-31 10:00:36.539527914 +0000 UTC m=+1.189647474 container remove 6d20c6f26a16a5275a7b180a2a80dfc58e8c3167febcf5cc1e9ca65c59b56773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_diffie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 05:00:36 np0005603787 systemd[1]: libpod-conmon-6d20c6f26a16a5275a7b180a2a80dfc58e8c3167febcf5cc1e9ca65c59b56773.scope: Deactivated successfully.
Jan 31 05:00:36 np0005603787 podman[99086]: 2026-01-31 10:00:36.963514594 +0000 UTC m=+0.061037927 container create a902cfbb790ed958a449bcee2dc57ade681e4498b1ee3c8a77b6f85b11e52c54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lovelace, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:00:37 np0005603787 podman[99086]: 2026-01-31 10:00:36.922786212 +0000 UTC m=+0.020309565 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:00:37 np0005603787 systemd[1]: Started libpod-conmon-a902cfbb790ed958a449bcee2dc57ade681e4498b1ee3c8a77b6f85b11e52c54.scope.
Jan 31 05:00:37 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:00:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 164 B/s, 4 objects/s recovering
Jan 31 05:00:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 31 05:00:37 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 05:00:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 31 05:00:37 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 05:00:37 np0005603787 podman[99086]: 2026-01-31 10:00:37.085938334 +0000 UTC m=+0.183461687 container init a902cfbb790ed958a449bcee2dc57ade681e4498b1ee3c8a77b6f85b11e52c54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lovelace, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:00:37 np0005603787 podman[99086]: 2026-01-31 10:00:37.09275403 +0000 UTC m=+0.190277373 container start a902cfbb790ed958a449bcee2dc57ade681e4498b1ee3c8a77b6f85b11e52c54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:00:37 np0005603787 keen_lovelace[99102]: 167 167
Jan 31 05:00:37 np0005603787 systemd[1]: libpod-a902cfbb790ed958a449bcee2dc57ade681e4498b1ee3c8a77b6f85b11e52c54.scope: Deactivated successfully.
Jan 31 05:00:37 np0005603787 conmon[99102]: conmon a902cfbb790ed958a449 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a902cfbb790ed958a449bcee2dc57ade681e4498b1ee3c8a77b6f85b11e52c54.scope/container/memory.events
Jan 31 05:00:37 np0005603787 podman[99086]: 2026-01-31 10:00:37.114920505 +0000 UTC m=+0.212443838 container attach a902cfbb790ed958a449bcee2dc57ade681e4498b1ee3c8a77b6f85b11e52c54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 05:00:37 np0005603787 podman[99086]: 2026-01-31 10:00:37.116056196 +0000 UTC m=+0.213579539 container died a902cfbb790ed958a449bcee2dc57ade681e4498b1ee3c8a77b6f85b11e52c54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 05:00:37 np0005603787 systemd[1]: var-lib-containers-storage-overlay-a45d62faed74fd42f89b8b3cfcdc999ec453cd7ef46afd43abd49f1e56e23bef-merged.mount: Deactivated successfully.
Jan 31 05:00:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 31 05:00:37 np0005603787 podman[99086]: 2026-01-31 10:00:37.377425247 +0000 UTC m=+0.474948580 container remove a902cfbb790ed958a449bcee2dc57ade681e4498b1ee3c8a77b6f85b11e52c54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lovelace, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 05:00:37 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 05:00:37 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 05:00:37 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 05:00:37 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 05:00:37 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 05:00:37 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 05:00:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 31 05:00:37 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 31 05:00:37 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 72 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72 pruub=13.756978989s) [2] r=-1 lpr=72 pi=[49,72)/1 crt=60'486 lcod 60'486 active pruub 140.976638794s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:37 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72 pruub=13.756620407s) [2] r=-1 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 active pruub 140.976455688s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:37 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72 pruub=13.756524086s) [2] r=-1 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 140.976455688s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:37 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 72 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72 pruub=13.756746292s) [2] r=-1 lpr=72 pi=[49,72)/1 crt=60'486 lcod 60'486 unknown NOTIFY pruub 140.976638794s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:37 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 72 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=72) [2]/[0] r=0 lpr=72 pi=[57,72)/1 crt=60'486 lcod 60'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:37 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 72 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=72) [2]/[0] r=0 lpr=72 pi=[57,72)/1 crt=60'486 lcod 60'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:37 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 72 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=72) [2]/[0] r=0 lpr=72 pi=[57,72)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:37 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 72 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=72) [2]/[0] r=0 lpr=72 pi=[57,72)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:37 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 72 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=72) [2]/[0] r=0 lpr=72 pi=[57,72)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:37 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 72 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=72) [2]/[0] r=0 lpr=72 pi=[57,72)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:37 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 72 pg[6.8( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=9.539555550s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=34'39 lcod 0'0 active pruub 142.959640503s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:37 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 72 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=72) [2]/[0] r=0 lpr=72 pi=[56,72)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:37 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 72 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=72) [2]/[0] r=0 lpr=72 pi=[56,72)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:37 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 72 pg[6.8( v 34'39 (0'0,34'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=9.539452553s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 142.959640503s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:37 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2] r=0 lpr=72 pi=[49,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:37 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 72 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=72) [2]/[0] r=-1 lpr=72 pi=[57,72)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:37 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 72 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=72) [2]/[0] r=-1 lpr=72 pi=[56,72)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:37 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 72 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=72) [2]/[0] r=-1 lpr=72 pi=[56,72)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:37 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 72 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=72) [2]/[0] r=-1 lpr=72 pi=[57,72)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:37 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 72 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:37 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 72 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=72) [2]/[0] r=-1 lpr=72 pi=[57,72)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:37 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 72 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=72) [2]/[0] r=-1 lpr=72 pi=[57,72)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:37 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2] r=0 lpr=72 pi=[49,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:37 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 72 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=72) [2]/[0] r=-1 lpr=72 pi=[57,72)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:37 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 72 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=72) [2]/[0] r=-1 lpr=72 pi=[57,72)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:37 np0005603787 systemd[1]: libpod-conmon-a902cfbb790ed958a449bcee2dc57ade681e4498b1ee3c8a77b6f85b11e52c54.scope: Deactivated successfully.
Jan 31 05:00:37 np0005603787 podman[99126]: 2026-01-31 10:00:37.503350204 +0000 UTC m=+0.051978850 container create 9882851c2c4eab5b9d84cb9a9b52874c4f723e8c9b6a3afa8076bd55749d7d0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 05:00:37 np0005603787 podman[99126]: 2026-01-31 10:00:37.472602305 +0000 UTC m=+0.021230981 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:00:37 np0005603787 systemd[1]: Started libpod-conmon-9882851c2c4eab5b9d84cb9a9b52874c4f723e8c9b6a3afa8076bd55749d7d0a.scope.
Jan 31 05:00:37 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:00:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662dcc200448c8566e7789ecd2a3422295ed9946b393db568a7fd42ff0a0a604/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:00:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662dcc200448c8566e7789ecd2a3422295ed9946b393db568a7fd42ff0a0a604/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:00:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662dcc200448c8566e7789ecd2a3422295ed9946b393db568a7fd42ff0a0a604/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:00:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662dcc200448c8566e7789ecd2a3422295ed9946b393db568a7fd42ff0a0a604/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:00:37 np0005603787 podman[99126]: 2026-01-31 10:00:37.686801129 +0000 UTC m=+0.235429855 container init 9882851c2c4eab5b9d84cb9a9b52874c4f723e8c9b6a3afa8076bd55749d7d0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:00:37 np0005603787 podman[99126]: 2026-01-31 10:00:37.694838689 +0000 UTC m=+0.243467435 container start 9882851c2c4eab5b9d84cb9a9b52874c4f723e8c9b6a3afa8076bd55749d7d0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 05:00:37 np0005603787 podman[99126]: 2026-01-31 10:00:37.705058158 +0000 UTC m=+0.253686834 container attach 9882851c2c4eab5b9d84cb9a9b52874c4f723e8c9b6a3afa8076bd55749d7d0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_blackburn, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]: {
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:    "0": [
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:        {
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "devices": [
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "/dev/loop3"
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            ],
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "lv_name": "ceph_lv0",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "lv_size": "21470642176",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "name": "ceph_lv0",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "tags": {
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.cluster_name": "ceph",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.crush_device_class": "",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.encrypted": "0",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.objectstore": "bluestore",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.osd_id": "0",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.type": "block",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.vdo": "0",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.with_tpm": "0"
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            },
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "type": "block",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "vg_name": "ceph_vg0"
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:        }
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:    ],
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:    "1": [
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:        {
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "devices": [
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "/dev/loop4"
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            ],
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "lv_name": "ceph_lv1",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "lv_size": "21470642176",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "name": "ceph_lv1",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "tags": {
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.cluster_name": "ceph",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.crush_device_class": "",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.encrypted": "0",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.objectstore": "bluestore",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.osd_id": "1",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.type": "block",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.vdo": "0",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.with_tpm": "0"
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            },
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "type": "block",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "vg_name": "ceph_vg1"
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:        }
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:    ],
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:    "2": [
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:        {
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "devices": [
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "/dev/loop5"
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            ],
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "lv_name": "ceph_lv2",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "lv_size": "21470642176",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "name": "ceph_lv2",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "tags": {
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.cluster_name": "ceph",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.crush_device_class": "",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.encrypted": "0",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.objectstore": "bluestore",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.osd_id": "2",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.type": "block",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.vdo": "0",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:                "ceph.with_tpm": "0"
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            },
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "type": "block",
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:            "vg_name": "ceph_vg2"
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:        }
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]:    ]
Jan 31 05:00:37 np0005603787 wizardly_blackburn[99142]: }
Jan 31 05:00:37 np0005603787 systemd[1]: libpod-9882851c2c4eab5b9d84cb9a9b52874c4f723e8c9b6a3afa8076bd55749d7d0a.scope: Deactivated successfully.
Jan 31 05:00:37 np0005603787 podman[99126]: 2026-01-31 10:00:37.972098385 +0000 UTC m=+0.520727041 container died 9882851c2c4eab5b9d84cb9a9b52874c4f723e8c9b6a3afa8076bd55749d7d0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_blackburn, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:00:38 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 31 05:00:38 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 31 05:00:38 np0005603787 systemd[1]: var-lib-containers-storage-overlay-662dcc200448c8566e7789ecd2a3422295ed9946b393db568a7fd42ff0a0a604-merged.mount: Deactivated successfully.
Jan 31 05:00:38 np0005603787 podman[99126]: 2026-01-31 10:00:38.402204921 +0000 UTC m=+0.950833577 container remove 9882851c2c4eab5b9d84cb9a9b52874c4f723e8c9b6a3afa8076bd55749d7d0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_blackburn, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 05:00:38 np0005603787 systemd[1]: libpod-conmon-9882851c2c4eab5b9d84cb9a9b52874c4f723e8c9b6a3afa8076bd55749d7d0a.scope: Deactivated successfully.
Jan 31 05:00:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 31 05:00:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 31 05:00:38 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 31 05:00:38 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 73 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[49,73)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:38 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 73 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[49,73)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:38 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 73 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[49,73)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:38 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 73 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[49,73)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:38 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 73 pg[6.8( v 34'39 (0'0,34'39] local-lis/les=72/73 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:38 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=73) [2]/[1] r=0 lpr=73 pi=[49,73)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:38 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 73 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=73) [2]/[1] r=0 lpr=73 pi=[49,73)/1 crt=60'486 lcod 60'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:38 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 73 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=73) [2]/[1] r=0 lpr=73 pi=[49,73)/1 crt=60'486 lcod 60'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:38 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=73) [2]/[1] r=0 lpr=73 pi=[49,73)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:38 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 05:00:38 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 05:00:38 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 73 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=72/73 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=72) [2]/[0] async=[2] r=0 lpr=72 pi=[57,72)/1 crt=60'485 lcod 60'484 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:38 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 73 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=72/73 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=72) [2]/[0] async=[2] r=0 lpr=72 pi=[56,72)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:38 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 73 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=72/73 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=72) [2]/[0] async=[2] r=0 lpr=72 pi=[57,72)/1 crt=60'487 lcod 60'486 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:38 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 73 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=72/73 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=72) [2]/[0] async=[2] r=0 lpr=72 pi=[57,72)/1 crt=60'485 lcod 60'484 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:38 np0005603787 podman[99236]: 2026-01-31 10:00:38.820892576 +0000 UTC m=+0.043320303 container create 4686b1110629fe7bcfcb3381cf2b7240fa8911dee41c94541fd552a3625c1275 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hermann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 05:00:38 np0005603787 systemd[1]: Started libpod-conmon-4686b1110629fe7bcfcb3381cf2b7240fa8911dee41c94541fd552a3625c1275.scope.
Jan 31 05:00:38 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:00:38 np0005603787 podman[99236]: 2026-01-31 10:00:38.796369727 +0000 UTC m=+0.018797474 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:00:38 np0005603787 podman[99236]: 2026-01-31 10:00:38.906488172 +0000 UTC m=+0.128915919 container init 4686b1110629fe7bcfcb3381cf2b7240fa8911dee41c94541fd552a3625c1275 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hermann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:00:38 np0005603787 podman[99236]: 2026-01-31 10:00:38.950475352 +0000 UTC m=+0.172903079 container start 4686b1110629fe7bcfcb3381cf2b7240fa8911dee41c94541fd552a3625c1275 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 05:00:38 np0005603787 funny_hermann[99252]: 167 167
Jan 31 05:00:38 np0005603787 systemd[1]: libpod-4686b1110629fe7bcfcb3381cf2b7240fa8911dee41c94541fd552a3625c1275.scope: Deactivated successfully.
Jan 31 05:00:39 np0005603787 podman[99236]: 2026-01-31 10:00:39.042310179 +0000 UTC m=+0.264737956 container attach 4686b1110629fe7bcfcb3381cf2b7240fa8911dee41c94541fd552a3625c1275 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hermann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 05:00:39 np0005603787 podman[99236]: 2026-01-31 10:00:39.043178302 +0000 UTC m=+0.265606089 container died 4686b1110629fe7bcfcb3381cf2b7240fa8911dee41c94541fd552a3625c1275 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hermann, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 05:00:39 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 31 05:00:39 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 31 05:00:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 4 remapped+peering, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 138 B/s, 3 objects/s recovering
Jan 31 05:00:39 np0005603787 systemd[1]: var-lib-containers-storage-overlay-2aa75bfde7db497ad3fe6b0d431579216bde22d87be71cdc1214bd2471f13ede-merged.mount: Deactivated successfully.
Jan 31 05:00:39 np0005603787 podman[99236]: 2026-01-31 10:00:39.188880857 +0000 UTC m=+0.411308584 container remove 4686b1110629fe7bcfcb3381cf2b7240fa8911dee41c94541fd552a3625c1275 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hermann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:00:39 np0005603787 systemd[1]: libpod-conmon-4686b1110629fe7bcfcb3381cf2b7240fa8911dee41c94541fd552a3625c1275.scope: Deactivated successfully.
Jan 31 05:00:39 np0005603787 podman[99277]: 2026-01-31 10:00:39.315300088 +0000 UTC m=+0.040164788 container create 23becc311daf298012d4f1928212ccfcfb3caa09518fa062ad92c06d56cc4a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_knuth, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 05:00:39 np0005603787 systemd[1]: Started libpod-conmon-23becc311daf298012d4f1928212ccfcfb3caa09518fa062ad92c06d56cc4a98.scope.
Jan 31 05:00:39 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:00:39 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f36f57d80b0f1ed54a075c233b716a60ac4893a47aad67f1c8aba18f28bfd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:00:39 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f36f57d80b0f1ed54a075c233b716a60ac4893a47aad67f1c8aba18f28bfd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:00:39 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f36f57d80b0f1ed54a075c233b716a60ac4893a47aad67f1c8aba18f28bfd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:00:39 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f36f57d80b0f1ed54a075c233b716a60ac4893a47aad67f1c8aba18f28bfd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:00:39 np0005603787 podman[99277]: 2026-01-31 10:00:39.387943029 +0000 UTC m=+0.112807779 container init 23becc311daf298012d4f1928212ccfcfb3caa09518fa062ad92c06d56cc4a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_knuth, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Jan 31 05:00:39 np0005603787 podman[99277]: 2026-01-31 10:00:39.29673522 +0000 UTC m=+0.021599950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:00:39 np0005603787 podman[99277]: 2026-01-31 10:00:39.39310547 +0000 UTC m=+0.117970170 container start 23becc311daf298012d4f1928212ccfcfb3caa09518fa062ad92c06d56cc4a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:00:39 np0005603787 podman[99277]: 2026-01-31 10:00:39.407351379 +0000 UTC m=+0.132216079 container attach 23becc311daf298012d4f1928212ccfcfb3caa09518fa062ad92c06d56cc4a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 31 05:00:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 31 05:00:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 31 05:00:39 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 31 05:00:39 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 74 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/57 les/c/f=73/58/0 sis=74 pruub=15.014167786s) [2] async=[2] r=-1 lpr=74 pi=[57,74)/1 crt=60'487 lcod 60'486 active pruub 150.463592529s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:39 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 74 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/57 les/c/f=73/58/0 sis=74 pruub=15.014286995s) [2] async=[2] r=-1 lpr=74 pi=[57,74)/1 crt=60'485 lcod 60'484 active pruub 150.463790894s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:39 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 74 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/57 les/c/f=73/58/0 sis=74 pruub=15.014230728s) [2] r=-1 lpr=74 pi=[57,74)/1 crt=60'485 lcod 60'484 unknown NOTIFY pruub 150.463790894s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:39 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 74 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/57 les/c/f=73/58/0 sis=74 pruub=15.014034271s) [2] r=-1 lpr=74 pi=[57,74)/1 crt=60'487 lcod 60'486 unknown NOTIFY pruub 150.463592529s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:39 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 74 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/56 les/c/f=73/57/0 sis=74 pruub=15.013113022s) [2] async=[2] r=-1 lpr=74 pi=[56,74)/1 crt=39'483 active pruub 150.463577271s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:39 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 74 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/56 les/c/f=73/57/0 sis=74 pruub=15.013068199s) [2] r=-1 lpr=74 pi=[56,74)/1 crt=39'483 unknown NOTIFY pruub 150.463577271s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:39 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 74 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/57 les/c/f=73/58/0 sis=74 pruub=15.011574745s) [2] async=[2] r=-1 lpr=74 pi=[57,74)/1 crt=60'485 lcod 60'484 active pruub 150.462356567s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:39 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 74 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/57 les/c/f=73/58/0 sis=74 pruub=15.011392593s) [2] r=-1 lpr=74 pi=[57,74)/1 crt=60'485 lcod 60'484 unknown NOTIFY pruub 150.462356567s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:39 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 74 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/57 les/c/f=73/58/0 sis=74) [2] r=0 lpr=74 pi=[57,74)/1 pct=0'0 crt=60'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:39 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 74 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/57 les/c/f=73/58/0 sis=74) [2] r=0 lpr=74 pi=[57,74)/1 crt=60'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:39 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 74 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/56 les/c/f=73/57/0 sis=74) [2] r=0 lpr=74 pi=[56,74)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:39 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 74 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/57 les/c/f=73/58/0 sis=74) [2] r=0 lpr=74 pi=[57,74)/1 pct=0'0 crt=60'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:39 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 74 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/56 les/c/f=73/57/0 sis=74) [2] r=0 lpr=74 pi=[56,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:39 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 74 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/57 les/c/f=73/58/0 sis=74) [2] r=0 lpr=74 pi=[57,74)/1 pct=0'0 crt=60'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:39 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 74 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/57 les/c/f=73/58/0 sis=74) [2] r=0 lpr=74 pi=[57,74)/1 crt=60'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:39 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 74 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/57 les/c/f=73/58/0 sis=74) [2] r=0 lpr=74 pi=[57,74)/1 crt=60'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:39 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=73/74 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[49,73)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:39 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 74 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=73/74 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[49,73)/1 crt=60'487 lcod 60'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:40 np0005603787 lvm[99371]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:00:40 np0005603787 lvm[99371]: VG ceph_vg0 finished
Jan 31 05:00:40 np0005603787 lvm[99372]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:00:40 np0005603787 lvm[99372]: VG ceph_vg1 finished
Jan 31 05:00:40 np0005603787 lvm[99374]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:00:40 np0005603787 lvm[99374]: VG ceph_vg2 finished
Jan 31 05:00:40 np0005603787 brave_knuth[99293]: {}
Jan 31 05:00:40 np0005603787 systemd[1]: libpod-23becc311daf298012d4f1928212ccfcfb3caa09518fa062ad92c06d56cc4a98.scope: Deactivated successfully.
Jan 31 05:00:40 np0005603787 podman[99277]: 2026-01-31 10:00:40.121746233 +0000 UTC m=+0.846610923 container died 23becc311daf298012d4f1928212ccfcfb3caa09518fa062ad92c06d56cc4a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:00:40 np0005603787 systemd[1]: var-lib-containers-storage-overlay-90f36f57d80b0f1ed54a075c233b716a60ac4893a47aad67f1c8aba18f28bfd3-merged.mount: Deactivated successfully.
Jan 31 05:00:40 np0005603787 podman[99277]: 2026-01-31 10:00:40.28363022 +0000 UTC m=+1.008494920 container remove 23becc311daf298012d4f1928212ccfcfb3caa09518fa062ad92c06d56cc4a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_knuth, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:00:40 np0005603787 systemd[1]: libpod-conmon-23becc311daf298012d4f1928212ccfcfb3caa09518fa062ad92c06d56cc4a98.scope: Deactivated successfully.
Jan 31 05:00:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:00:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:00:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:00:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:00:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 31 05:00:40 np0005603787 systemd[1]: session-34.scope: Deactivated successfully.
Jan 31 05:00:40 np0005603787 systemd[1]: session-34.scope: Consumed 7.955s CPU time.
Jan 31 05:00:40 np0005603787 systemd-logind[786]: Session 34 logged out. Waiting for processes to exit.
Jan 31 05:00:40 np0005603787 systemd-logind[786]: Removed session 34.
Jan 31 05:00:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 31 05:00:40 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 31 05:00:40 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=73/49 les/c/f=74/50/0 sis=75) [2] r=0 lpr=75 pi=[49,75)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:40 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=73/49 les/c/f=74/50/0 sis=75) [2] r=0 lpr=75 pi=[49,75)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:40 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 75 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=73/49 les/c/f=74/50/0 sis=75) [2] r=0 lpr=75 pi=[49,75)/1 pct=0'0 crt=60'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:40 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 75 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=73/49 les/c/f=74/50/0 sis=75) [2] r=0 lpr=75 pi=[49,75)/1 crt=60'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:40 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=73/74 n=7 ec=49/33 lis/c=73/49 les/c/f=74/50/0 sis=75 pruub=15.001729965s) [2] async=[2] r=-1 lpr=75 pi=[49,75)/1 crt=39'483 lcod 0'0 active pruub 145.270111084s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:40 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=73/74 n=7 ec=49/33 lis/c=73/49 les/c/f=74/50/0 sis=75 pruub=15.001603127s) [2] r=-1 lpr=75 pi=[49,75)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 145.270111084s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:40 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 75 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=73/74 n=6 ec=49/33 lis/c=73/49 les/c/f=74/50/0 sis=75 pruub=15.004298210s) [2] async=[2] r=-1 lpr=75 pi=[49,75)/1 crt=60'487 lcod 60'486 active pruub 145.272949219s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:40 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 75 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=73/74 n=6 ec=49/33 lis/c=73/49 les/c/f=74/50/0 sis=75 pruub=15.004214287s) [2] r=-1 lpr=75 pi=[49,75)/1 crt=60'487 lcod 60'486 unknown NOTIFY pruub 145.272949219s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:40 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 75 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=74/75 n=7 ec=49/33 lis/c=72/57 les/c/f=73/58/0 sis=74) [2] r=0 lpr=74 pi=[57,74)/1 crt=60'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:40 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 75 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=74/75 n=6 ec=49/33 lis/c=72/56 les/c/f=73/57/0 sis=74) [2] r=0 lpr=74 pi=[56,74)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:40 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 75 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=74/75 n=6 ec=49/33 lis/c=72/57 les/c/f=73/58/0 sis=74) [2] r=0 lpr=74 pi=[57,74)/1 crt=60'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:40 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 75 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=74/75 n=7 ec=49/33 lis/c=72/57 les/c/f=73/58/0 sis=74) [2] r=0 lpr=74 pi=[57,74)/1 crt=60'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:40 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:00:40 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:00:40 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 31 05:00:40 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 31 05:00:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 2 active+remapped, 4 remapped+peering, 299 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 131 B/s, 3 objects/s recovering
Jan 31 05:00:41 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 31 05:00:41 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 31 05:00:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:00:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 31 05:00:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 31 05:00:41 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 31 05:00:41 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 76 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=75/76 n=7 ec=49/33 lis/c=73/49 les/c/f=74/50/0 sis=75) [2] r=0 lpr=75 pi=[49,75)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:41 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 76 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=75/76 n=6 ec=49/33 lis/c=73/49 les/c/f=74/50/0 sis=75) [2] r=0 lpr=75 pi=[49,75)/1 crt=60'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:42 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 31 05:00:42 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:00:43
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Some PGs (0.013115) are inactive; try again later
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v157: 305 pgs: 2 active+remapped, 4 remapped+peering, 299 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 113 B/s, 2 objects/s recovering
Jan 31 05:00:43 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 31 05:00:43 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:00:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:00:44 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 31 05:00:44 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 31 05:00:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 281 B/s, 6 objects/s recovering
Jan 31 05:00:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 31 05:00:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 05:00:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 31 05:00:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 05:00:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 31 05:00:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 05:00:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 05:00:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 31 05:00:45 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 31 05:00:45 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 77 pg[6.9( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=45/20 lis/c=53/53 les/c/f=54/54/0 sis=77 pruub=13.231389999s) [0] r=-1 lpr=77 pi=[53,77)/1 crt=34'39 lcod 0'0 active pruub 148.231918335s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:45 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 77 pg[6.9( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=45/20 lis/c=53/53 les/c/f=54/54/0 sis=77 pruub=13.231346130s) [0] r=-1 lpr=77 pi=[53,77)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 148.231918335s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:45 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Jan 31 05:00:45 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 77 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=53/53 les/c/f=54/54/0 sis=77) [0] r=0 lpr=77 pi=[53,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 05:00:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 05:00:45 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Jan 31 05:00:45 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 31 05:00:45 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 31 05:00:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 31 05:00:46 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 05:00:46 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 05:00:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 31 05:00:46 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 31 05:00:46 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 78 pg[6.9( v 34'39 (0'0,34'39] local-lis/les=77/78 n=1 ec=45/20 lis/c=53/53 les/c/f=54/54/0 sis=77) [0] r=0 lpr=77 pi=[53,77)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:00:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 193 B/s, 3 objects/s recovering
Jan 31 05:00:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 31 05:00:47 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 05:00:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 31 05:00:47 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 05:00:47 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Jan 31 05:00:47 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Jan 31 05:00:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 31 05:00:47 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 31 05:00:47 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 31 05:00:47 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 05:00:47 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 05:00:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 31 05:00:47 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 31 05:00:47 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 79 pg[6.a( v 34'39 (0'0,34'39] local-lis/les=59/60 n=0 ec=45/20 lis/c=59/59 les/c/f=60/60/0 sis=79 pruub=11.755875587s) [0] r=-1 lpr=79 pi=[59,79)/1 crt=34'39 lcod 0'0 active pruub 149.268218994s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:47 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 79 pg[6.a( v 34'39 (0'0,34'39] local-lis/les=59/60 n=0 ec=45/20 lis/c=59/59 les/c/f=60/60/0 sis=79 pruub=11.755776405s) [0] r=-1 lpr=79 pi=[59,79)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 149.268218994s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:47 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 79 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=59/59 les/c/f=60/60/0 sis=79) [0] r=0 lpr=79 pi=[59,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:47 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 05:00:47 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 05:00:47 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 31 05:00:47 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 31 05:00:48 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 31 05:00:48 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 31 05:00:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 31 05:00:48 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 05:00:48 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 05:00:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 31 05:00:48 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 31 05:00:48 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 80 pg[6.a( v 34'39 (0'0,34'39] local-lis/les=79/80 n=0 ec=45/20 lis/c=59/59 les/c/f=60/60/0 sis=79) [0] r=0 lpr=79 pi=[59,79)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:48 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 31 05:00:48 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 31 05:00:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:00:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 31 05:00:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 05:00:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 31 05:00:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 05:00:49 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 31 05:00:49 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 31 05:00:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 31 05:00:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 05:00:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 05:00:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 31 05:00:49 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 31 05:00:49 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 81 pg[6.b( v 34'39 (0'0,34'39] local-lis/les=61/62 n=1 ec=45/20 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.845438004s) [1] r=-1 lpr=81 pi=[61,81)/1 crt=34'39 active pruub 154.660491943s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:49 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 81 pg[6.b( v 34'39 (0'0,34'39] local-lis/les=61/62 n=1 ec=45/20 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.845388412s) [1] r=-1 lpr=81 pi=[61,81)/1 crt=34'39 unknown NOTIFY pruub 154.660491943s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:49 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 81 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:49 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 05:00:49 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 05:00:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 31 05:00:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 05:00:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 05:00:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 31 05:00:50 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 31 05:00:50 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 82 pg[6.b( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=81/82 n=1 ec=45/20 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=34'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v167: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:00:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:00:51 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.a scrub starts
Jan 31 05:00:51 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.a scrub ok
Jan 31 05:00:52 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 31 05:00:52 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 31 05:00:52 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 31 05:00:52 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 31 05:00:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:00:53 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 31 05:00:53 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 31 05:00:53 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.911363227468158e-07 of space, bias 4.0, pg target 0.0011893635872961788 quantized to 16 (current 16)
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:00:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:00:54 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Jan 31 05:00:54 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Jan 31 05:00:54 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 31 05:00:54 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 31 05:00:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v169: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:00:55 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 31 05:00:55 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 31 05:00:55 np0005603787 systemd-logind[786]: New session 35 of user zuul.
Jan 31 05:00:55 np0005603787 systemd[1]: Started Session 35 of User zuul.
Jan 31 05:00:56 np0005603787 python3.9[99593]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 05:00:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:00:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 05:00:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 31 05:00:57 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 05:00:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 31 05:00:57 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 05:00:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 31 05:00:57 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 05:00:57 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 05:00:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 31 05:00:57 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 31 05:00:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=83 pruub=9.793163300s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 active pruub 156.977264404s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=83 pruub=9.792673111s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 156.977264404s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 83 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=83 pruub=9.792558670s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=60'486 lcod 60'486 active pruub 156.977279663s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:57 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 83 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=83 pruub=9.792521477s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=60'486 lcod 60'486 unknown NOTIFY pruub 156.977279663s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:57 np0005603787 python3.9[99767]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:00:57 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 31 05:00:57 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:57 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:57 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 31 05:00:57 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 05:00:57 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 05:00:58 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 31 05:00:58 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 31 05:00:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 31 05:00:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 31 05:00:58 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 31 05:00:58 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[49,84)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:58 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[49,84)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:58 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[49,84)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:58 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[49,84)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:00:58 np0005603787 python3.9[99923]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:00:58 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 84 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=84) [2]/[1] r=0 lpr=84 pi=[49,84)/1 crt=60'486 lcod 60'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:58 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 84 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=84) [2]/[1] r=0 lpr=84 pi=[49,84)/1 crt=60'486 lcod 60'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:58 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=84) [2]/[1] r=0 lpr=84 pi=[49,84)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:00:58 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=84) [2]/[1] r=0 lpr=84 pi=[49,84)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:00:58 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 05:00:58 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 05:00:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 05:00:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 31 05:00:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 05:00:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 31 05:00:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 05:00:59 np0005603787 python3.9[100076]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:00:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 31 05:00:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 05:00:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 05:00:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 31 05:00:59 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 31 05:00:59 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 31 05:00:59 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 85 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=84/85 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[49,84)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:59 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 31 05:00:59 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 85 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=84/85 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[49,84)/1 crt=60'487 lcod 60'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:00:59 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 05:00:59 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 05:00:59 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 05:00:59 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 05:01:00 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 85 pg[6.d( v 34'39 (0'0,34'39] local-lis/les=65/66 n=1 ec=45/20 lis/c=65/65 les/c/f=66/66/0 sis=85 pruub=8.450965881s) [1] r=-1 lpr=85 pi=[65,85)/1 crt=34'39 active pruub 164.496841431s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:00 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 85 pg[6.d( v 34'39 (0'0,34'39] local-lis/les=65/66 n=1 ec=45/20 lis/c=65/65 les/c/f=66/66/0 sis=85 pruub=8.450886726s) [1] r=-1 lpr=85 pi=[65,85)/1 crt=34'39 unknown NOTIFY pruub 164.496841431s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:00 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=65/65 les/c/f=66/66/0 sis=85) [1] r=0 lpr=85 pi=[65,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:00 np0005603787 python3.9[100230]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:01:00 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 31 05:01:00 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 31 05:01:00 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 31 05:01:00 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 86 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=84/85 n=7 ec=49/33 lis/c=84/49 les/c/f=85/50/0 sis=86 pruub=14.972255707s) [2] async=[2] r=-1 lpr=86 pi=[49,86)/1 crt=39'483 lcod 0'0 active pruub 165.152343750s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:00 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 86 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=84/85 n=7 ec=49/33 lis/c=84/49 les/c/f=85/50/0 sis=86 pruub=14.971831322s) [2] r=-1 lpr=86 pi=[49,86)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 165.152343750s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:00 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 86 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=84/85 n=6 ec=49/33 lis/c=84/49 les/c/f=85/50/0 sis=86 pruub=14.977382660s) [2] async=[2] r=-1 lpr=86 pi=[49,86)/1 crt=60'487 lcod 60'486 active pruub 165.158218384s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:00 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 86 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=84/85 n=6 ec=49/33 lis/c=84/49 les/c/f=85/50/0 sis=86 pruub=14.977269173s) [2] r=-1 lpr=86 pi=[49,86)/1 crt=60'487 lcod 60'486 unknown NOTIFY pruub 165.158218384s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:00 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 86 pg[6.d( v 34'39 lc 32'13 (0'0,34'39] local-lis/les=85/86 n=1 ec=45/20 lis/c=65/65 les/c/f=66/66/0 sis=85) [1] r=0 lpr=85 pi=[65,85)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:01:00 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 86 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=84/49 les/c/f=85/50/0 sis=86) [2] r=0 lpr=86 pi=[49,86)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:00 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 86 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=84/49 les/c/f=85/50/0 sis=86) [2] r=0 lpr=86 pi=[49,86)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:00 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 86 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=84/49 les/c/f=85/50/0 sis=86) [2] r=0 lpr=86 pi=[49,86)/1 pct=0'0 crt=60'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:00 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 86 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=84/49 les/c/f=85/50/0 sis=86) [2] r=0 lpr=86 pi=[49,86)/1 crt=60'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:00 np0005603787 python3.9[100382]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:01:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 2 active+remapped, 1 peering, 302 active+clean; 462 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 104 B/s, 3 objects/s recovering
Jan 31 05:01:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 31 05:01:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 31 05:01:01 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 31 05:01:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 87 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=86/87 n=7 ec=49/33 lis/c=84/49 les/c/f=85/50/0 sis=86) [2] r=0 lpr=86 pi=[49,86)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:01:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 87 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=86/87 n=6 ec=49/33 lis/c=84/49 les/c/f=85/50/0 sis=86) [2] r=0 lpr=86 pi=[49,86)/1 crt=60'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:01:01 np0005603787 python3.9[100547]: ansible-ansible.builtin.service_facts Invoked
Jan 31 05:01:01 np0005603787 network[100564]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 05:01:01 np0005603787 network[100565]: 'network-scripts' will be removed from distribution in near future.
Jan 31 05:01:01 np0005603787 network[100566]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 05:01:01 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 31 05:01:01 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 31 05:01:02 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 31 05:01:02 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 31 05:01:02 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 31 05:01:02 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 31 05:01:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v178: 305 pgs: 2 active+remapped, 1 peering, 302 active+clean; 462 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 88 B/s, 2 objects/s recovering
Jan 31 05:01:03 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 31 05:01:03 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 31 05:01:04 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 31 05:01:04 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 31 05:01:04 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 31 05:01:04 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 31 05:01:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 1 peering, 304 active+clean; 462 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Jan 31 05:01:05 np0005603787 python3.9[100826]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:01:05 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 31 05:01:05 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 31 05:01:05 np0005603787 python3.9[100976]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:01:06 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 11.a scrub starts
Jan 31 05:01:06 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 11.a scrub ok
Jan 31 05:01:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:01:07 np0005603787 python3.9[101130]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:01:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v180: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 462 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 63 B/s, 2 objects/s recovering
Jan 31 05:01:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 31 05:01:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 05:01:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 31 05:01:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 05:01:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 31 05:01:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 05:01:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 05:01:08 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 05:01:08 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 05:01:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 31 05:01:08 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 31 05:01:08 np0005603787 python3.9[101288]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:01:09 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 05:01:09 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 05:01:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v182: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 462 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Jan 31 05:01:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 31 05:01:09 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 05:01:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 31 05:01:09 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 05:01:09 np0005603787 python3.9[101372]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:01:09 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 31 05:01:10 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 31 05:01:10 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 31 05:01:10 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 05:01:10 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 05:01:10 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 05:01:10 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 05:01:10 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 31 05:01:10 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 31 05:01:10 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 89 pg[6.f( v 34'39 (0'0,34'39] local-lis/les=61/62 n=1 ec=45/20 lis/c=61/61 les/c/f=62/62/0 sis=89 pruub=12.356919289s) [2] r=-1 lpr=89 pi=[61,89)/1 crt=34'39 active pruub 178.664306641s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:10 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 89 pg[6.f( v 34'39 (0'0,34'39] local-lis/les=61/62 n=1 ec=45/20 lis/c=61/61 les/c/f=62/62/0 sis=89 pruub=12.356835365s) [2] r=-1 lpr=89 pi=[61,89)/1 crt=34'39 unknown NOTIFY pruub 178.664306641s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:10 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 31 05:01:10 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 31 05:01:10 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 89 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=61/61 les/c/f=62/62/0 sis=89) [2] r=0 lpr=89 pi=[61,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:11 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Jan 31 05:01:11 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Jan 31 05:01:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v184: 305 pgs: 305 active+clean; 462 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Jan 31 05:01:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 31 05:01:11 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 31 05:01:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 31 05:01:11 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 05:01:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 31 05:01:11 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 05:01:11 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 05:01:11 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 31 05:01:11 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 31 05:01:11 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 90 pg[6.f( v 34'39 lc 32'1 (0'0,34'39] local-lis/les=89/90 n=1 ec=45/20 lis/c=61/61 les/c/f=62/62/0 sis=89) [2] r=0 lpr=89 pi=[61,89)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:01:12 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 31 05:01:12 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 31 05:01:12 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 05:01:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 305 active+clean; 462 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:01:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 31 05:01:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 31 05:01:13 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 31 05:01:13 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 31 05:01:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 31 05:01:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 05:01:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 31 05:01:13 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 31 05:01:13 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 31 05:01:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:01:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:01:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:01:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:01:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:01:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:01:14 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 31 05:01:14 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 31 05:01:14 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 05:01:15 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 31 05:01:15 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 31 05:01:15 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 31 05:01:15 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 31 05:01:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 305 active+clean; 462 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Jan 31 05:01:15 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Jan 31 05:01:15 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 31 05:01:15 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 31 05:01:15 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 31 05:01:15 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 31 05:01:15 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 05:01:15 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 31 05:01:15 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 31 05:01:15 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 31 05:01:16 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 31 05:01:16 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 31 05:01:16 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:01:16 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 05:01:17 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 31 05:01:17 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 31 05:01:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 305 active+clean; 462 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Jan 31 05:01:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Jan 31 05:01:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 31 05:01:17 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 31 05:01:17 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 31 05:01:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 31 05:01:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 05:01:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 31 05:01:17 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 31 05:01:17 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 93 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=93 pruub=8.795087814s) [2] r=-1 lpr=93 pi=[57,93)/1 crt=60'484 lcod 60'484 active pruub 182.496795654s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:17 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 93 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=93 pruub=8.795028687s) [2] r=-1 lpr=93 pi=[57,93)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 182.496795654s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:17 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 31 05:01:17 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=93) [2] r=0 lpr=93 pi=[57,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 31 05:01:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 31 05:01:19 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 31 05:01:19 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[57,94)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:19 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[57,94)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:19 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 94 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=94) [2]/[0] r=0 lpr=94 pi=[57,94)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:19 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 05:01:19 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 94 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=94) [2]/[0] r=0 lpr=94 pi=[57,94)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 1 unknown, 304 active+clean; 462 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:01:19 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 31 05:01:19 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 31 05:01:20 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 31 05:01:20 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 31 05:01:20 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 31 05:01:20 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 31 05:01:20 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 31 05:01:20 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 95 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[57,94)/1 crt=60'485 lcod 60'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:01:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 31 05:01:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 31 05:01:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 1 unknown, 304 active+clean; 462 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:01:21 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 31 05:01:21 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 96 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/57 les/c/f=95/58/0 sis=96 pruub=14.996465683s) [2] async=[2] r=-1 lpr=96 pi=[57,96)/1 crt=60'485 lcod 60'484 active pruub 192.083694458s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:21 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 96 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/57 les/c/f=95/58/0 sis=96 pruub=14.996387482s) [2] r=-1 lpr=96 pi=[57,96)/1 crt=60'485 lcod 60'484 unknown NOTIFY pruub 192.083694458s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:21 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 96 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/57 les/c/f=95/58/0 sis=96) [2] r=0 lpr=96 pi=[57,96)/1 pct=0'0 crt=60'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:21 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 96 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/57 les/c/f=95/58/0 sis=96) [2] r=0 lpr=96 pi=[57,96)/1 crt=60'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:01:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 31 05:01:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 31 05:01:22 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 97 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=96/97 n=6 ec=49/33 lis/c=94/57 les/c/f=95/58/0 sis=96) [2] r=0 lpr=96 pi=[57,96)/1 crt=60'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:01:22 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 31 05:01:22 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 31 05:01:22 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 31 05:01:23 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Jan 31 05:01:23 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Jan 31 05:01:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 1 unknown, 304 active+clean; 462 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:01:23 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 31 05:01:23 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 31 05:01:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 462 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 170 B/s wr, 8 op/s; 87 B/s, 1 objects/s recovering
Jan 31 05:01:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Jan 31 05:01:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 31 05:01:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 31 05:01:25 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 31 05:01:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 05:01:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 31 05:01:25 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 31 05:01:25 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Jan 31 05:01:25 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Jan 31 05:01:26 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 31 05:01:26 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 31 05:01:26 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 31 05:01:26 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 31 05:01:26 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 05:01:26 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:01:26 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 31 05:01:26 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 31 05:01:27 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 31 05:01:27 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 31 05:01:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 341 B/s wr, 8 op/s; 87 B/s, 1 objects/s recovering
Jan 31 05:01:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Jan 31 05:01:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 31 05:01:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 31 05:01:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 05:01:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 31 05:01:27 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 31 05:01:27 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 31 05:01:27 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99 pruub=13.641047478s) [1] r=-1 lpr=99 pi=[56,99)/1 crt=39'483 active pruub 197.485244751s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:27 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99 pruub=13.640983582s) [1] r=-1 lpr=99 pi=[56,99)/1 crt=39'483 unknown NOTIFY pruub 197.485244751s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:27 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1] r=0 lpr=99 pi=[56,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:27 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 31 05:01:27 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 31 05:01:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 31 05:01:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 31 05:01:28 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 31 05:01:28 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[56,100)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:28 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[56,100)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:28 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 05:01:28 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=100) [1]/[0] r=0 lpr=100 pi=[56,100)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:28 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=100) [1]/[0] r=0 lpr=100 pi=[56,100)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 341 B/s wr, 8 op/s; 87 B/s, 1 objects/s recovering
Jan 31 05:01:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Jan 31 05:01:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 31 05:01:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 31 05:01:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 05:01:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 31 05:01:29 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 31 05:01:29 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 31 05:01:29 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 31 05:01:29 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 31 05:01:30 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=100/101 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[56,100)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:01:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 31 05:01:30 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 05:01:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 31 05:01:30 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 31 05:01:30 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=100/101 n=6 ec=49/33 lis/c=100/56 les/c/f=101/57/0 sis=102 pruub=15.901243210s) [1] async=[1] r=-1 lpr=102 pi=[56,102)/1 crt=39'483 active pruub 202.265090942s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:30 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=100/101 n=6 ec=49/33 lis/c=100/56 les/c/f=101/57/0 sis=102 pruub=15.901005745s) [1] r=-1 lpr=102 pi=[56,102)/1 crt=39'483 unknown NOTIFY pruub 202.265090942s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:30 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=100/56 les/c/f=101/57/0 sis=102) [1] r=0 lpr=102 pi=[56,102)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:30 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=100/56 les/c/f=101/57/0 sis=102) [1] r=0 lpr=102 pi=[56,102)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:31 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 31 05:01:31 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 31 05:01:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:01:31 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Jan 31 05:01:31 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 31 05:01:31 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 31 05:01:31 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 05:01:31 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 31 05:01:31 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 31 05:01:31 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 31 05:01:31 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=101 pruub=13.045442581s) [0] r=-1 lpr=101 pi=[69,101)/1 crt=39'483 active pruub 187.165649414s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:31 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=101 pruub=13.045361519s) [0] r=-1 lpr=101 pi=[69,101)/1 crt=39'483 unknown NOTIFY pruub 187.165649414s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:31 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 103 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=102/103 n=6 ec=49/33 lis/c=100/56 les/c/f=101/57/0 sis=102) [1] r=0 lpr=102 pi=[56,102)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:01:31 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 103 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=101) [0] r=0 lpr=103 pi=[69,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:32 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 31 05:01:32 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 31 05:01:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 31 05:01:32 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 05:01:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 31 05:01:32 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 31 05:01:32 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 104 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[69,104)/2 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:32 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 104 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[69,104)/2 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:32 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=104) [0]/[2] r=0 lpr=104 pi=[69,104)/2 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:32 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=104) [0]/[2] r=0 lpr=104 pi=[69,104)/2 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:33 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 31 05:01:33 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 31 05:01:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v210: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:01:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Jan 31 05:01:33 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 31 05:01:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 31 05:01:33 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 05:01:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 31 05:01:33 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 31 05:01:33 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 31 05:01:33 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 105 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=104/105 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[69,104)/2 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:01:34 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 31 05:01:34 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 31 05:01:34 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 31 05:01:34 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 05:01:34 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 31 05:01:34 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 31 05:01:34 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 106 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=104/105 n=6 ec=49/33 lis/c=104/69 les/c/f=105/70/0 sis=106 pruub=15.347133636s) [0] async=[0] r=-1 lpr=106 pi=[69,106)/2 crt=39'483 active pruub 192.581359863s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:34 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 106 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=104/105 n=6 ec=49/33 lis/c=104/69 les/c/f=105/70/0 sis=106 pruub=15.347033501s) [0] r=-1 lpr=106 pi=[69,106)/2 crt=39'483 unknown NOTIFY pruub 192.581359863s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:34 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 106 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=104/69 les/c/f=105/70/0 sis=106) [0] r=0 lpr=106 pi=[69,106)/2 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:34 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 106 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=104/69 les/c/f=105/70/0 sis=106) [0] r=0 lpr=106 pi=[69,106)/2 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 1 activating+remapped, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4/248 objects misplaced (1.613%)
Jan 31 05:01:35 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 31 05:01:35 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 31 05:01:35 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 31 05:01:35 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 107 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=106/107 n=6 ec=49/33 lis/c=104/69 les/c/f=105/70/0 sis=106) [0] r=0 lpr=106 pi=[69,106)/2 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:01:36 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 31 05:01:36 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 31 05:01:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:01:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 1 activating+remapped, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4/248 objects misplaced (1.613%); 23 B/s, 0 objects/s recovering
Jan 31 05:01:37 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 31 05:01:37 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 31 05:01:38 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 31 05:01:38 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 31 05:01:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Jan 31 05:01:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Jan 31 05:01:39 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 31 05:01:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 31 05:01:39 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 05:01:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 31 05:01:39 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 31 05:01:39 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 108 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108 pruub=10.942340851s) [2] r=-1 lpr=108 pi=[57,108)/1 crt=60'486 lcod 60'486 active pruub 206.498580933s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:39 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 108 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108 pruub=10.942297935s) [2] r=-1 lpr=108 pi=[57,108)/1 crt=60'486 lcod 60'486 unknown NOTIFY pruub 206.498580933s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:39 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2] r=0 lpr=108 pi=[57,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:39 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 31 05:01:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 31 05:01:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 31 05:01:40 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 31 05:01:40 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 109 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=109) [2]/[0] r=0 lpr=109 pi=[57,109)/1 crt=60'486 lcod 60'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:40 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 109 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=109) [2]/[0] r=0 lpr=109 pi=[57,109)/1 crt=60'486 lcod 60'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:40 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 109 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=109) [2]/[0] r=-1 lpr=109 pi=[57,109)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:40 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 109 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=109) [2]/[0] r=-1 lpr=109 pi=[57,109)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:40 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 05:01:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:01:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:01:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:01:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:01:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:01:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:01:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:01:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:01:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:01:41 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:01:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:01:41 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:01:41 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 31 05:01:41 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 31 05:01:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Jan 31 05:01:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 31 05:01:41 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 31 05:01:41 np0005603787 podman[101663]: 2026-01-31 10:01:41.356691093 +0000 UTC m=+0.069801494 container create 9bd8be8f81f160b5709399d330f69a83a255a42941bff89e0e2f9bce4ad9bb57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:01:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:01:41 np0005603787 podman[101663]: 2026-01-31 10:01:41.305862884 +0000 UTC m=+0.018973315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:01:41 np0005603787 systemd[1]: Started libpod-conmon-9bd8be8f81f160b5709399d330f69a83a255a42941bff89e0e2f9bce4ad9bb57.scope.
Jan 31 05:01:41 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:01:41 np0005603787 podman[101663]: 2026-01-31 10:01:41.467229008 +0000 UTC m=+0.180339429 container init 9bd8be8f81f160b5709399d330f69a83a255a42941bff89e0e2f9bce4ad9bb57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_galileo, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 05:01:41 np0005603787 podman[101663]: 2026-01-31 10:01:41.473473775 +0000 UTC m=+0.186584176 container start 9bd8be8f81f160b5709399d330f69a83a255a42941bff89e0e2f9bce4ad9bb57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 05:01:41 np0005603787 sharp_galileo[101679]: 167 167
Jan 31 05:01:41 np0005603787 systemd[1]: libpod-9bd8be8f81f160b5709399d330f69a83a255a42941bff89e0e2f9bce4ad9bb57.scope: Deactivated successfully.
Jan 31 05:01:41 np0005603787 podman[101663]: 2026-01-31 10:01:41.491302817 +0000 UTC m=+0.204413238 container attach 9bd8be8f81f160b5709399d330f69a83a255a42941bff89e0e2f9bce4ad9bb57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 05:01:41 np0005603787 podman[101663]: 2026-01-31 10:01:41.493098626 +0000 UTC m=+0.206209047 container died 9bd8be8f81f160b5709399d330f69a83a255a42941bff89e0e2f9bce4ad9bb57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:01:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 31 05:01:41 np0005603787 systemd[1]: var-lib-containers-storage-overlay-6a6b6308f85f99f8f9fafda149f0d8426109475cc8779b163ce95272052b9aff-merged.mount: Deactivated successfully.
Jan 31 05:01:41 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 05:01:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 31 05:01:41 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 31 05:01:41 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:01:41 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:01:41 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:01:41 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 31 05:01:41 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 05:01:41 np0005603787 podman[101663]: 2026-01-31 10:01:41.626774314 +0000 UTC m=+0.339884755 container remove 9bd8be8f81f160b5709399d330f69a83a255a42941bff89e0e2f9bce4ad9bb57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_galileo, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 05:01:41 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 110 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=109/110 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=109) [2]/[0] async=[2] r=0 lpr=109 pi=[57,109)/1 crt=60'487 lcod 60'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:01:41 np0005603787 systemd[1]: libpod-conmon-9bd8be8f81f160b5709399d330f69a83a255a42941bff89e0e2f9bce4ad9bb57.scope: Deactivated successfully.
Jan 31 05:01:41 np0005603787 podman[101705]: 2026-01-31 10:01:41.799013778 +0000 UTC m=+0.080250122 container create d509763e293c4c82c27eb1d28c0ffba01eadc7ab73d92c7e3347ebbd0dab6829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:01:41 np0005603787 podman[101705]: 2026-01-31 10:01:41.745068636 +0000 UTC m=+0.026305000 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:01:41 np0005603787 systemd[1]: Started libpod-conmon-d509763e293c4c82c27eb1d28c0ffba01eadc7ab73d92c7e3347ebbd0dab6829.scope.
Jan 31 05:01:41 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:01:41 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8b626bbd9f987271e7cf952888a8a652698205151450181f0d04fa8da26f13d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:01:41 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8b626bbd9f987271e7cf952888a8a652698205151450181f0d04fa8da26f13d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:01:41 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8b626bbd9f987271e7cf952888a8a652698205151450181f0d04fa8da26f13d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:01:41 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8b626bbd9f987271e7cf952888a8a652698205151450181f0d04fa8da26f13d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:01:41 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8b626bbd9f987271e7cf952888a8a652698205151450181f0d04fa8da26f13d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:01:41 np0005603787 podman[101705]: 2026-01-31 10:01:41.929063301 +0000 UTC m=+0.210299665 container init d509763e293c4c82c27eb1d28c0ffba01eadc7ab73d92c7e3347ebbd0dab6829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 05:01:41 np0005603787 podman[101705]: 2026-01-31 10:01:41.935468891 +0000 UTC m=+0.216705235 container start d509763e293c4c82c27eb1d28c0ffba01eadc7ab73d92c7e3347ebbd0dab6829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jepsen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:01:41 np0005603787 podman[101705]: 2026-01-31 10:01:41.944781028 +0000 UTC m=+0.226017372 container attach d509763e293c4c82c27eb1d28c0ffba01eadc7ab73d92c7e3347ebbd0dab6829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 05:01:42 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 31 05:01:42 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 31 05:01:42 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 31 05:01:42 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 31 05:01:42 np0005603787 adoring_jepsen[101722]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:01:42 np0005603787 adoring_jepsen[101722]: --> All data devices are unavailable
Jan 31 05:01:42 np0005603787 podman[101705]: 2026-01-31 10:01:42.361550893 +0000 UTC m=+0.642787247 container died d509763e293c4c82c27eb1d28c0ffba01eadc7ab73d92c7e3347ebbd0dab6829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:01:42 np0005603787 systemd[1]: libpod-d509763e293c4c82c27eb1d28c0ffba01eadc7ab73d92c7e3347ebbd0dab6829.scope: Deactivated successfully.
Jan 31 05:01:42 np0005603787 systemd[1]: var-lib-containers-storage-overlay-a8b626bbd9f987271e7cf952888a8a652698205151450181f0d04fa8da26f13d-merged.mount: Deactivated successfully.
Jan 31 05:01:42 np0005603787 podman[101705]: 2026-01-31 10:01:42.422037749 +0000 UTC m=+0.703274093 container remove d509763e293c4c82c27eb1d28c0ffba01eadc7ab73d92c7e3347ebbd0dab6829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jepsen, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:01:42 np0005603787 systemd[1]: libpod-conmon-d509763e293c4c82c27eb1d28c0ffba01eadc7ab73d92c7e3347ebbd0dab6829.scope: Deactivated successfully.
Jan 31 05:01:42 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 31 05:01:42 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 31 05:01:42 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 31 05:01:42 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 111 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=109/110 n=6 ec=49/33 lis/c=109/57 les/c/f=110/58/0 sis=111 pruub=14.978886604s) [2] async=[2] r=-1 lpr=111 pi=[57,111)/1 crt=60'487 lcod 60'486 active pruub 213.617279053s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:42 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 111 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=109/110 n=6 ec=49/33 lis/c=109/57 les/c/f=110/58/0 sis=111 pruub=14.978693008s) [2] r=-1 lpr=111 pi=[57,111)/1 crt=60'487 lcod 60'486 unknown NOTIFY pruub 213.617279053s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:42 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 111 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=109/57 les/c/f=110/58/0 sis=111) [2] r=0 lpr=111 pi=[57,111)/1 pct=0'0 crt=60'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:42 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 111 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=109/57 les/c/f=110/58/0 sis=111) [2] r=0 lpr=111 pi=[57,111)/1 crt=60'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:42 np0005603787 podman[101816]: 2026-01-31 10:01:42.80699923 +0000 UTC m=+0.039436798 container create e5d2b964fe971fc440989f5f08df26dedb4a2f9b5f5fd6bc368442d9ad36c7d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bose, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 05:01:42 np0005603787 systemd[1]: Started libpod-conmon-e5d2b964fe971fc440989f5f08df26dedb4a2f9b5f5fd6bc368442d9ad36c7d2.scope.
Jan 31 05:01:42 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:01:42 np0005603787 podman[101816]: 2026-01-31 10:01:42.79079009 +0000 UTC m=+0.023227668 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:01:42 np0005603787 podman[101816]: 2026-01-31 10:01:42.887314542 +0000 UTC m=+0.119752190 container init e5d2b964fe971fc440989f5f08df26dedb4a2f9b5f5fd6bc368442d9ad36c7d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:01:42 np0005603787 podman[101816]: 2026-01-31 10:01:42.895917402 +0000 UTC m=+0.128354960 container start e5d2b964fe971fc440989f5f08df26dedb4a2f9b5f5fd6bc368442d9ad36c7d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bose, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 05:01:42 np0005603787 elastic_bose[101832]: 167 167
Jan 31 05:01:42 np0005603787 podman[101816]: 2026-01-31 10:01:42.901555871 +0000 UTC m=+0.133993449 container attach e5d2b964fe971fc440989f5f08df26dedb4a2f9b5f5fd6bc368442d9ad36c7d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 05:01:42 np0005603787 systemd[1]: libpod-e5d2b964fe971fc440989f5f08df26dedb4a2f9b5f5fd6bc368442d9ad36c7d2.scope: Deactivated successfully.
Jan 31 05:01:42 np0005603787 podman[101816]: 2026-01-31 10:01:42.903521493 +0000 UTC m=+0.135959081 container died e5d2b964fe971fc440989f5f08df26dedb4a2f9b5f5fd6bc368442d9ad36c7d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 05:01:42 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ba3610e02c309b88237fee5f86a8a8c76d34601f695ed288d6d9013a755c6491-merged.mount: Deactivated successfully.
Jan 31 05:01:42 np0005603787 podman[101816]: 2026-01-31 10:01:42.945374584 +0000 UTC m=+0.177812142 container remove e5d2b964fe971fc440989f5f08df26dedb4a2f9b5f5fd6bc368442d9ad36c7d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bose, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 05:01:42 np0005603787 systemd[1]: libpod-conmon-e5d2b964fe971fc440989f5f08df26dedb4a2f9b5f5fd6bc368442d9ad36c7d2.scope: Deactivated successfully.
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:01:43
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'images', 'backups']
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:01:43 np0005603787 podman[101857]: 2026-01-31 10:01:43.061328433 +0000 UTC m=+0.041292628 container create 99c60453136687077ec537f553b871e7ddd5130f24a60135a7bd34794bdcfa69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:01:43 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 31 05:01:43 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:01:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 31 05:01:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 31 05:01:43 np0005603787 systemd[1]: Started libpod-conmon-99c60453136687077ec537f553b871e7ddd5130f24a60135a7bd34794bdcfa69.scope.
Jan 31 05:01:43 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 31 05:01:43 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 31 05:01:43 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:01:43 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb44fa0f76b6671c4dd96db4fb93b94506c0ba099f6d7b7ad11a82c1f314b2f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:01:43 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb44fa0f76b6671c4dd96db4fb93b94506c0ba099f6d7b7ad11a82c1f314b2f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:01:43 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb44fa0f76b6671c4dd96db4fb93b94506c0ba099f6d7b7ad11a82c1f314b2f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:01:43 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb44fa0f76b6671c4dd96db4fb93b94506c0ba099f6d7b7ad11a82c1f314b2f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:01:43 np0005603787 podman[101857]: 2026-01-31 10:01:43.039792041 +0000 UTC m=+0.019756266 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:01:43 np0005603787 podman[101857]: 2026-01-31 10:01:43.149360651 +0000 UTC m=+0.129324866 container init 99c60453136687077ec537f553b871e7ddd5130f24a60135a7bd34794bdcfa69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 05:01:43 np0005603787 podman[101857]: 2026-01-31 10:01:43.156303435 +0000 UTC m=+0.136267630 container start 99c60453136687077ec537f553b871e7ddd5130f24a60135a7bd34794bdcfa69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:01:43 np0005603787 podman[101857]: 2026-01-31 10:01:43.162246432 +0000 UTC m=+0.142210637 container attach 99c60453136687077ec537f553b871e7ddd5130f24a60135a7bd34794bdcfa69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]: {
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:    "0": [
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:        {
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "devices": [
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "/dev/loop3"
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            ],
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "lv_name": "ceph_lv0",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "lv_size": "21470642176",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "name": "ceph_lv0",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "tags": {
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.cluster_name": "ceph",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.crush_device_class": "",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.encrypted": "0",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.objectstore": "bluestore",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.osd_id": "0",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.type": "block",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.vdo": "0",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.with_tpm": "0"
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            },
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "type": "block",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "vg_name": "ceph_vg0"
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:        }
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:    ],
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:    "1": [
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:        {
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "devices": [
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "/dev/loop4"
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            ],
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "lv_name": "ceph_lv1",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "lv_size": "21470642176",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "name": "ceph_lv1",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "tags": {
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.cluster_name": "ceph",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.crush_device_class": "",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.encrypted": "0",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.objectstore": "bluestore",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.osd_id": "1",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.type": "block",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.vdo": "0",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.with_tpm": "0"
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            },
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "type": "block",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "vg_name": "ceph_vg1"
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:        }
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:    ],
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:    "2": [
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:        {
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "devices": [
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "/dev/loop5"
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            ],
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "lv_name": "ceph_lv2",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "lv_size": "21470642176",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "name": "ceph_lv2",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "tags": {
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.cluster_name": "ceph",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.crush_device_class": "",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.encrypted": "0",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.objectstore": "bluestore",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.osd_id": "2",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.type": "block",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.vdo": "0",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:                "ceph.with_tpm": "0"
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            },
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "type": "block",
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:            "vg_name": "ceph_vg2"
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:        }
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]:    ]
Jan 31 05:01:43 np0005603787 hardcore_sinoussi[101874]: }
Jan 31 05:01:43 np0005603787 systemd[1]: libpod-99c60453136687077ec537f553b871e7ddd5130f24a60135a7bd34794bdcfa69.scope: Deactivated successfully.
Jan 31 05:01:43 np0005603787 podman[101857]: 2026-01-31 10:01:43.458129188 +0000 UTC m=+0.438093383 container died 99c60453136687077ec537f553b871e7ddd5130f24a60135a7bd34794bdcfa69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 05:01:43 np0005603787 systemd[1]: var-lib-containers-storage-overlay-cb44fa0f76b6671c4dd96db4fb93b94506c0ba099f6d7b7ad11a82c1f314b2f1-merged.mount: Deactivated successfully.
Jan 31 05:01:43 np0005603787 podman[101857]: 2026-01-31 10:01:43.525000984 +0000 UTC m=+0.504965179 container remove 99c60453136687077ec537f553b871e7ddd5130f24a60135a7bd34794bdcfa69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_sinoussi, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:01:43 np0005603787 systemd[1]: libpod-conmon-99c60453136687077ec537f553b871e7ddd5130f24a60135a7bd34794bdcfa69.scope: Deactivated successfully.
Jan 31 05:01:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 31 05:01:43 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 31 05:01:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 05:01:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 31 05:01:43 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 31 05:01:43 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 112 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=111/112 n=6 ec=49/33 lis/c=109/57 les/c/f=110/58/0 sis=111) [2] r=0 lpr=111 pi=[57,111)/1 crt=60'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:01:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:01:43 np0005603787 podman[101960]: 2026-01-31 10:01:43.954571909 +0000 UTC m=+0.032533036 container create f4daff465c3c9abf88005d09dd10a3d91d23970d19535546b709e05566c47008 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_joliot, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:01:43 np0005603787 systemd[1]: Started libpod-conmon-f4daff465c3c9abf88005d09dd10a3d91d23970d19535546b709e05566c47008.scope.
Jan 31 05:01:44 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:01:44 np0005603787 podman[101960]: 2026-01-31 10:01:44.021380022 +0000 UTC m=+0.099341159 container init f4daff465c3c9abf88005d09dd10a3d91d23970d19535546b709e05566c47008 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_joliot, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 05:01:44 np0005603787 podman[101960]: 2026-01-31 10:01:44.026040346 +0000 UTC m=+0.104001473 container start f4daff465c3c9abf88005d09dd10a3d91d23970d19535546b709e05566c47008 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 05:01:44 np0005603787 systemd[1]: libpod-f4daff465c3c9abf88005d09dd10a3d91d23970d19535546b709e05566c47008.scope: Deactivated successfully.
Jan 31 05:01:44 np0005603787 condescending_joliot[101976]: 167 167
Jan 31 05:01:44 np0005603787 conmon[101976]: conmon f4daff465c3c9abf8800 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f4daff465c3c9abf88005d09dd10a3d91d23970d19535546b709e05566c47008.scope/container/memory.events
Jan 31 05:01:44 np0005603787 podman[101960]: 2026-01-31 10:01:44.032524268 +0000 UTC m=+0.110485395 container attach f4daff465c3c9abf88005d09dd10a3d91d23970d19535546b709e05566c47008 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:01:44 np0005603787 podman[101960]: 2026-01-31 10:01:44.033011812 +0000 UTC m=+0.110972939 container died f4daff465c3c9abf88005d09dd10a3d91d23970d19535546b709e05566c47008 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 05:01:44 np0005603787 podman[101960]: 2026-01-31 10:01:43.939769785 +0000 UTC m=+0.017730912 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:01:44 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 31 05:01:44 np0005603787 systemd[1]: var-lib-containers-storage-overlay-4ce299d180cddfff7beef0113aae637d50038b25388b845a3b25bbcb02eca34d-merged.mount: Deactivated successfully.
Jan 31 05:01:44 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 31 05:01:44 np0005603787 podman[101960]: 2026-01-31 10:01:44.093133948 +0000 UTC m=+0.171095085 container remove f4daff465c3c9abf88005d09dd10a3d91d23970d19535546b709e05566c47008 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:01:44 np0005603787 systemd[1]: libpod-conmon-f4daff465c3c9abf88005d09dd10a3d91d23970d19535546b709e05566c47008.scope: Deactivated successfully.
Jan 31 05:01:44 np0005603787 podman[102000]: 2026-01-31 10:01:44.221961008 +0000 UTC m=+0.045610022 container create 258f08224f914594e78bad0f68c49f1b07d30c1a105e0cd715e1d6d4b08c851c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hertz, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:01:44 np0005603787 systemd[1]: Started libpod-conmon-258f08224f914594e78bad0f68c49f1b07d30c1a105e0cd715e1d6d4b08c851c.scope.
Jan 31 05:01:44 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:01:44 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c16d15a6e2385a5e5c84db7025c198c561d4fa3c248c256fbb8a5e348e401b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:01:44 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c16d15a6e2385a5e5c84db7025c198c561d4fa3c248c256fbb8a5e348e401b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:01:44 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c16d15a6e2385a5e5c84db7025c198c561d4fa3c248c256fbb8a5e348e401b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:01:44 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c16d15a6e2385a5e5c84db7025c198c561d4fa3c248c256fbb8a5e348e401b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:01:44 np0005603787 podman[102000]: 2026-01-31 10:01:44.204915045 +0000 UTC m=+0.028564079 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:01:44 np0005603787 podman[102000]: 2026-01-31 10:01:44.308472934 +0000 UTC m=+0.132121988 container init 258f08224f914594e78bad0f68c49f1b07d30c1a105e0cd715e1d6d4b08c851c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hertz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Jan 31 05:01:44 np0005603787 podman[102000]: 2026-01-31 10:01:44.315960374 +0000 UTC m=+0.139609408 container start 258f08224f914594e78bad0f68c49f1b07d30c1a105e0cd715e1d6d4b08c851c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 05:01:44 np0005603787 podman[102000]: 2026-01-31 10:01:44.319758935 +0000 UTC m=+0.143408199 container attach 258f08224f914594e78bad0f68c49f1b07d30c1a105e0cd715e1d6d4b08c851c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hertz, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:01:44 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 05:01:44 np0005603787 lvm[102096]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:01:44 np0005603787 lvm[102095]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:01:44 np0005603787 lvm[102096]: VG ceph_vg1 finished
Jan 31 05:01:44 np0005603787 lvm[102095]: VG ceph_vg0 finished
Jan 31 05:01:44 np0005603787 lvm[102098]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:01:44 np0005603787 lvm[102098]: VG ceph_vg2 finished
Jan 31 05:01:45 np0005603787 nice_hertz[102017]: {}
Jan 31 05:01:45 np0005603787 systemd[1]: libpod-258f08224f914594e78bad0f68c49f1b07d30c1a105e0cd715e1d6d4b08c851c.scope: Deactivated successfully.
Jan 31 05:01:45 np0005603787 systemd[1]: libpod-258f08224f914594e78bad0f68c49f1b07d30c1a105e0cd715e1d6d4b08c851c.scope: Consumed 1.115s CPU time.
Jan 31 05:01:45 np0005603787 podman[102000]: 2026-01-31 10:01:45.046771777 +0000 UTC m=+0.870420791 container died 258f08224f914594e78bad0f68c49f1b07d30c1a105e0cd715e1d6d4b08c851c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hertz, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:01:45 np0005603787 systemd[1]: var-lib-containers-storage-overlay-0c16d15a6e2385a5e5c84db7025c198c561d4fa3c248c256fbb8a5e348e401b3-merged.mount: Deactivated successfully.
Jan 31 05:01:45 np0005603787 podman[102000]: 2026-01-31 10:01:45.09848014 +0000 UTC m=+0.922129164 container remove 258f08224f914594e78bad0f68c49f1b07d30c1a105e0cd715e1d6d4b08c851c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Jan 31 05:01:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 1 peering, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 91 B/s, 1 objects/s recovering
Jan 31 05:01:45 np0005603787 systemd[1]: libpod-conmon-258f08224f914594e78bad0f68c49f1b07d30c1a105e0cd715e1d6d4b08c851c.scope: Deactivated successfully.
Jan 31 05:01:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:01:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:01:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:01:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:01:46 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:01:46 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:01:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:01:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 1 peering, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Jan 31 05:01:47 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 31 05:01:47 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 31 05:01:48 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 31 05:01:48 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 31 05:01:49 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 31 05:01:49 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 31 05:01:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 55 B/s, 1 objects/s recovering
Jan 31 05:01:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Jan 31 05:01:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 31 05:01:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 31 05:01:49 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 31 05:01:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 05:01:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 31 05:01:49 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 31 05:01:49 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 113 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=86/87 n=6 ec=49/33 lis/c=86/86 les/c/f=87/87/0 sis=113 pruub=8.224711418s) [0] r=-1 lpr=113 pi=[86,113)/1 crt=60'487 active pruub 200.125595093s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:49 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 113 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=86/87 n=6 ec=49/33 lis/c=86/86 les/c/f=87/87/0 sis=113 pruub=8.224658012s) [0] r=-1 lpr=113 pi=[86,113)/1 crt=60'487 unknown NOTIFY pruub 200.125595093s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:49 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=86/86 les/c/f=87/87/0 sis=113) [0] r=0 lpr=113 pi=[86,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 31 05:01:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 31 05:01:50 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 31 05:01:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 114 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=86/86 les/c/f=87/87/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[86,114)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:50 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 114 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=86/86 les/c/f=87/87/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[86,114)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 05:01:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 114 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=86/87 n=6 ec=49/33 lis/c=86/86 les/c/f=87/87/0 sis=114) [0]/[2] r=0 lpr=114 pi=[86,114)/1 crt=60'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:50 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 114 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=86/87 n=6 ec=49/33 lis/c=86/86 les/c/f=87/87/0 sis=114) [0]/[2] r=0 lpr=114 pi=[86,114)/1 crt=60'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:50 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 31 05:01:51 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 31 05:01:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:01:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Jan 31 05:01:51 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 31 05:01:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 31 05:01:51 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 31 05:01:51 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 05:01:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 31 05:01:51 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 31 05:01:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:01:51 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 115 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=114/115 n=6 ec=49/33 lis/c=86/86 les/c/f=87/87/0 sis=114) [0]/[2] async=[0] r=0 lpr=114 pi=[86,114)/1 crt=60'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:01:51 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 31 05:01:51 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 31 05:01:51 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 31 05:01:51 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 31 05:01:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 31 05:01:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 31 05:01:52 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 31 05:01:52 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 116 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=114/86 les/c/f=115/87/0 sis=116) [0] r=0 lpr=116 pi=[86,116)/1 pct=0'0 crt=60'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:52 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 116 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=114/86 les/c/f=115/87/0 sis=116) [0] r=0 lpr=116 pi=[86,116)/1 crt=60'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:52 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 116 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=114/115 n=6 ec=49/33 lis/c=114/86 les/c/f=115/87/0 sis=116 pruub=15.291157722s) [0] async=[0] r=-1 lpr=116 pi=[86,116)/1 crt=60'487 active pruub 210.382354736s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:52 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 116 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=114/115 n=6 ec=49/33 lis/c=114/86 les/c/f=115/87/0 sis=116 pruub=15.290963173s) [0] r=-1 lpr=116 pi=[86,116)/1 crt=60'487 unknown NOTIFY pruub 210.382354736s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:52 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 05:01:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 1 peering, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 104 B/s, 2 objects/s recovering
Jan 31 05:01:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 31 05:01:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 31 05:01:53 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 31 05:01:53 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 117 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=116/117 n=6 ec=49/33 lis/c=114/86 les/c/f=115/87/0 sis=116) [0] r=0 lpr=116 pi=[86,116)/1 crt=60'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:01:53 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 31 05:01:53 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1719723981114704e-06 of space, bias 4.0, pg target 0.0014063668777337644 quantized to 16 (current 16)
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.387758839617113e-06 of space, bias 1.0, pg target 0.0013163276518851337 quantized to 32 (current 32)
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:01:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:01:54 np0005603787 python3.9[102290]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:01:54 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 31 05:01:54 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 31 05:01:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 1 peering, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 85 B/s, 1 objects/s recovering
Jan 31 05:01:55 np0005603787 python3.9[102577]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 05:01:55 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 31 05:01:55 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 31 05:01:55 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 31 05:01:55 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 31 05:01:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:01:56 np0005603787 python3.9[102729]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 05:01:56 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 31 05:01:56 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 31 05:01:57 np0005603787 python3.9[102881]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:01:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 1 peering, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Jan 31 05:01:57 np0005603787 python3.9[103033]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 05:01:57 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 31 05:01:57 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 31 05:01:58 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 31 05:01:58 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 31 05:01:58 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 31 05:01:58 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 31 05:01:59 np0005603787 python3.9[103185]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:01:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 53 B/s, 1 objects/s recovering
Jan 31 05:01:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Jan 31 05:01:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 31 05:01:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 31 05:01:59 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 31 05:01:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 05:01:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 31 05:01:59 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 31 05:01:59 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 118 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=8.963656425s) [0] r=-1 lpr=118 pi=[69,118)/1 crt=60'485 active pruub 211.166427612s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:01:59 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 118 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=8.963604927s) [0] r=-1 lpr=118 pi=[69,118)/1 crt=60'485 unknown NOTIFY pruub 211.166427612s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:01:59 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 118 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [0] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:01:59 np0005603787 python3.9[103337]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:01:59 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 31 05:01:59 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 31 05:02:00 np0005603787 python3.9[103415]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:02:00 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 31 05:02:00 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 05:02:00 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 31 05:02:00 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 31 05:02:00 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 119 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:02:00 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 119 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:02:00 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 119 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [0]/[2] r=0 lpr=119 pi=[69,119)/1 crt=60'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:02:00 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 119 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [0]/[2] r=0 lpr=119 pi=[69,119)/1 crt=60'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:02:00 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 31 05:02:00 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 31 05:02:01 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 31 05:02:01 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 31 05:02:01 np0005603787 python3.9[103567]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:02:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 05:02:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:02:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:02:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 31 05:02:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:02:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 31 05:02:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:02:01 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 31 05:02:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=74/75 n=6 ec=49/33 lis/c=74/74 les/c/f=75/75/0 sis=120 pruub=14.959450722s) [1] r=-1 lpr=120 pi=[74,120)/1 crt=39'483 active pruub 219.203079224s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:02:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=74/75 n=6 ec=49/33 lis/c=74/74 les/c/f=75/75/0 sis=120 pruub=14.959301949s) [1] r=-1 lpr=120 pi=[74,120)/1 crt=39'483 unknown NOTIFY pruub 219.203079224s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:02:01 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 120 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=119/120 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [0]/[2] async=[0] r=0 lpr=119 pi=[69,119)/1 crt=60'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:02:01 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 120 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=74/74 les/c/f=75/75/0 sis=120) [1] r=0 lpr=120 pi=[74,120)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:02:01 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 31 05:02:01 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 31 05:02:01 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 31 05:02:01 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 31 05:02:02 np0005603787 python3.9[103721]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 05:02:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 31 05:02:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 31 05:02:02 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 31 05:02:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=74/75 n=6 ec=49/33 lis/c=74/74 les/c/f=75/75/0 sis=121) [1]/[2] r=0 lpr=121 pi=[74,121)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:02:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=74/75 n=6 ec=49/33 lis/c=74/74 les/c/f=75/75/0 sis=121) [1]/[2] r=0 lpr=121 pi=[74,121)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:02:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 121 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=14.965559959s) [0] async=[0] r=-1 lpr=121 pi=[69,121)/1 crt=60'485 active pruub 220.247024536s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:02:02 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 121 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=14.965309143s) [0] r=-1 lpr=121 pi=[69,121)/1 crt=60'485 unknown NOTIFY pruub 220.247024536s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:02:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 121 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=74/74 les/c/f=75/75/0 sis=121) [1]/[2] r=-1 lpr=121 pi=[74,121)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:02:02 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 121 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=74/74 les/c/f=75/75/0 sis=121) [1]/[2] r=-1 lpr=121 pi=[74,121)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:02:02 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:02:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 121 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [0] r=0 lpr=121 pi=[69,121)/1 pct=0'0 crt=60'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:02:02 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 121 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [0] r=0 lpr=121 pi=[69,121)/1 crt=60'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:02:02 np0005603787 python3.9[103875]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 05:02:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 1 peering, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 1 objects/s recovering
Jan 31 05:02:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 31 05:02:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 31 05:02:03 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 31 05:02:03 np0005603787 ceph-osd[85879]: osd.0 pg_epoch: 122 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=121/122 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [0] r=0 lpr=121 pi=[69,121)/1 crt=60'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:02:03 np0005603787 python3.9[104028]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 05:02:03 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 31 05:02:03 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=121/122 n=6 ec=49/33 lis/c=74/74 les/c/f=75/75/0 sis=121) [1]/[2] async=[1] r=0 lpr=121 pi=[74,121)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:02:03 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 31 05:02:03 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 31 05:02:04 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 31 05:02:04 np0005603787 python3.9[104180]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 05:02:04 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 31 05:02:04 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 31 05:02:04 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 31 05:02:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 123 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=121/74 les/c/f=122/75/0 sis=123) [1] r=0 lpr=123 pi=[74,123)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:02:04 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 123 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=121/74 les/c/f=122/75/0 sis=123) [1] r=0 lpr=123 pi=[74,123)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:02:04 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 123 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=121/122 n=6 ec=49/33 lis/c=121/74 les/c/f=122/75/0 sis=123 pruub=15.363371849s) [1] async=[1] r=-1 lpr=123 pi=[74,123)/1 crt=39'483 active pruub 222.674468994s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:02:04 np0005603787 ceph-osd[87996]: osd.2 pg_epoch: 123 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=121/122 n=6 ec=49/33 lis/c=121/74 les/c/f=122/75/0 sis=123 pruub=15.363290787s) [1] r=-1 lpr=123 pi=[74,123)/1 crt=39'483 unknown NOTIFY pruub 222.674468994s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:02:04 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.b scrub starts
Jan 31 05:02:04 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.b scrub ok
Jan 31 05:02:05 np0005603787 python3.9[104332]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:02:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 1 objects/s recovering
Jan 31 05:02:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 31 05:02:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 31 05:02:05 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 31 05:02:05 np0005603787 ceph-osd[86934]: osd.1 pg_epoch: 124 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=123/124 n=6 ec=49/33 lis/c=121/74 les/c/f=122/75/0 sis=123) [1] r=0 lpr=123 pi=[74,123)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:02:05 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 31 05:02:05 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 31 05:02:05 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 31 05:02:05 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 31 05:02:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:02:06 np0005603787 python3.9[104485]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:02:06 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 31 05:02:06 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 31 05:02:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 2 peering, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 57 B/s, 2 objects/s recovering
Jan 31 05:02:07 np0005603787 python3.9[104637]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:02:07 np0005603787 python3.9[104715]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:02:07 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 31 05:02:07 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 31 05:02:07 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 31 05:02:07 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 31 05:02:08 np0005603787 python3.9[104867]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:02:08 np0005603787 python3.9[104945]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:02:08 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 31 05:02:08 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 31 05:02:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 1 peering, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Jan 31 05:02:09 np0005603787 python3.9[105097]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:02:10 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 31 05:02:10 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 31 05:02:10 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 31 05:02:10 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 31 05:02:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Jan 31 05:02:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:02:11 np0005603787 python3.9[105248]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:02:11 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 31 05:02:11 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 31 05:02:12 np0005603787 python3.9[105400]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 05:02:12 np0005603787 python3.9[105550]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:02:12 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 31 05:02:12 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 31 05:02:12 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 31 05:02:12 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 31 05:02:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Jan 31 05:02:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:02:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:02:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:02:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:02:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:02:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:02:13 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 31 05:02:13 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 31 05:02:13 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 31 05:02:13 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 31 05:02:14 np0005603787 python3.9[105702]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:02:14 np0005603787 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 05:02:14 np0005603787 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 05:02:14 np0005603787 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 05:02:14 np0005603787 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 05:02:14 np0005603787 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 05:02:14 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 31 05:02:14 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 31 05:02:15 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 31 05:02:15 np0005603787 python3.9[105863]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 05:02:15 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 31 05:02:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Jan 31 05:02:15 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 31 05:02:15 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 31 05:02:16 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:02:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:17 np0005603787 python3.9[106015]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:02:17 np0005603787 python3.9[106169]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:02:17 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 31 05:02:17 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 31 05:02:17 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 31 05:02:17 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 31 05:02:18 np0005603787 systemd[1]: session-35.scope: Deactivated successfully.
Jan 31 05:02:18 np0005603787 systemd[1]: session-35.scope: Consumed 1min 1.477s CPU time.
Jan 31 05:02:18 np0005603787 systemd-logind[786]: Session 35 logged out. Waiting for processes to exit.
Jan 31 05:02:18 np0005603787 systemd-logind[786]: Removed session 35.
Jan 31 05:02:18 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 31 05:02:18 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 31 05:02:19 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 31 05:02:19 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 31 05:02:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:19 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 31 05:02:19 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 31 05:02:19 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 31 05:02:20 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 31 05:02:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:02:21 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 31 05:02:21 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 31 05:02:22 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 31 05:02:22 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 31 05:02:22 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 31 05:02:22 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 31 05:02:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:23 np0005603787 systemd-logind[786]: New session 36 of user zuul.
Jan 31 05:02:23 np0005603787 systemd[1]: Started Session 36 of User zuul.
Jan 31 05:02:23 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 31 05:02:23 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 31 05:02:24 np0005603787 python3.9[106349]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:02:24 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 31 05:02:24 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 31 05:02:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:25 np0005603787 python3.9[106505]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 05:02:25 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.f scrub starts
Jan 31 05:02:25 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.f scrub ok
Jan 31 05:02:26 np0005603787 python3.9[106658]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:02:26 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:02:26 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 31 05:02:26 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 31 05:02:27 np0005603787 python3.9[106742]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 05:02:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:27 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 31 05:02:27 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 31 05:02:28 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 31 05:02:28 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 31 05:02:29 np0005603787 python3.9[106895]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:02:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:29 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 31 05:02:29 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 31 05:02:29 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 31 05:02:29 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 31 05:02:30 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 31 05:02:30 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 31 05:02:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:31 np0005603787 python3.9[107048]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 05:02:31 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:02:32 np0005603787 python3.9[107201]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:02:32 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 31 05:02:32 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 31 05:02:32 np0005603787 python3.9[107353]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 05:02:32 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 31 05:02:32 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 31 05:02:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:33 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 31 05:02:33 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 31 05:02:33 np0005603787 python3.9[107503]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:02:34 np0005603787 python3.9[107661]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:02:34 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 31 05:02:34 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 31 05:02:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:35 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Jan 31 05:02:35 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Jan 31 05:02:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:02:36 np0005603787 python3.9[107814]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:02:36 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 31 05:02:36 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 31 05:02:36 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 31 05:02:36 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 31 05:02:36 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 31 05:02:36 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 31 05:02:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:38 np0005603787 python3.9[108101]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 05:02:38 np0005603787 python3.9[108251]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:02:38 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 31 05:02:38 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 31 05:02:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:39 np0005603787 python3.9[108405]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:02:40 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 31 05:02:40 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 31 05:02:40 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 31 05:02:40 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 31 05:02:40 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 31 05:02:40 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 31 05:02:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:41 np0005603787 python3.9[108558]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:02:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:02:41 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 31 05:02:41 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 31 05:02:41 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 11.e scrub starts
Jan 31 05:02:41 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 11.e scrub ok
Jan 31 05:02:42 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 31 05:02:42 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:02:43
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.control', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'images']
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:02:43 np0005603787 python3.9[108711]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:02:43 np0005603787 python3.9[108865]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:02:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:02:44 np0005603787 systemd[1]: session-36.scope: Deactivated successfully.
Jan 31 05:02:44 np0005603787 systemd[1]: session-36.scope: Consumed 15.957s CPU time.
Jan 31 05:02:44 np0005603787 systemd-logind[786]: Session 36 logged out. Waiting for processes to exit.
Jan 31 05:02:44 np0005603787 systemd-logind[786]: Removed session 36.
Jan 31 05:02:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:02:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:02:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:02:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:02:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:02:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:02:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:02:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:02:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:02:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:02:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:02:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:02:45 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 31 05:02:45 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 31 05:02:46 np0005603787 podman[109030]: 2026-01-31 10:02:46.071727264 +0000 UTC m=+0.054587535 container create 28bd55825694f2b3d22ae22bc8342000435bb5e15685186b5ac9198353245ad3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:02:46 np0005603787 systemd[1]: Started libpod-conmon-28bd55825694f2b3d22ae22bc8342000435bb5e15685186b5ac9198353245ad3.scope.
Jan 31 05:02:46 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:02:46 np0005603787 podman[109030]: 2026-01-31 10:02:46.139776053 +0000 UTC m=+0.122636354 container init 28bd55825694f2b3d22ae22bc8342000435bb5e15685186b5ac9198353245ad3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_turing, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 05:02:46 np0005603787 podman[109030]: 2026-01-31 10:02:46.051674763 +0000 UTC m=+0.034535124 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:02:46 np0005603787 podman[109030]: 2026-01-31 10:02:46.14532614 +0000 UTC m=+0.128186401 container start 28bd55825694f2b3d22ae22bc8342000435bb5e15685186b5ac9198353245ad3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:02:46 np0005603787 podman[109030]: 2026-01-31 10:02:46.148317219 +0000 UTC m=+0.131177520 container attach 28bd55825694f2b3d22ae22bc8342000435bb5e15685186b5ac9198353245ad3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:02:46 np0005603787 quirky_turing[109046]: 167 167
Jan 31 05:02:46 np0005603787 systemd[1]: libpod-28bd55825694f2b3d22ae22bc8342000435bb5e15685186b5ac9198353245ad3.scope: Deactivated successfully.
Jan 31 05:02:46 np0005603787 podman[109030]: 2026-01-31 10:02:46.150493537 +0000 UTC m=+0.133353808 container died 28bd55825694f2b3d22ae22bc8342000435bb5e15685186b5ac9198353245ad3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_turing, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 05:02:46 np0005603787 systemd[1]: var-lib-containers-storage-overlay-6edcbf2f6b19cb88161fb9b5c68b8f08121049c4b1242e1eee789ffb0c770888-merged.mount: Deactivated successfully.
Jan 31 05:02:46 np0005603787 podman[109030]: 2026-01-31 10:02:46.184722702 +0000 UTC m=+0.167582973 container remove 28bd55825694f2b3d22ae22bc8342000435bb5e15685186b5ac9198353245ad3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Jan 31 05:02:46 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:02:46 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:02:46 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:02:46 np0005603787 systemd[1]: libpod-conmon-28bd55825694f2b3d22ae22bc8342000435bb5e15685186b5ac9198353245ad3.scope: Deactivated successfully.
Jan 31 05:02:46 np0005603787 podman[109071]: 2026-01-31 10:02:46.283739761 +0000 UTC m=+0.031436713 container create 2298ac2c0f1cf757fadd619438431f3ec8e93fddd364feb035b6c1b7e9458cf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_chebyshev, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 05:02:46 np0005603787 systemd[1]: Started libpod-conmon-2298ac2c0f1cf757fadd619438431f3ec8e93fddd364feb035b6c1b7e9458cf5.scope.
Jan 31 05:02:46 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:02:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41ae484b9f8ae4fc5843b9988b7d1048013e2dc05be75509550f5b67e9e26406/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:02:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41ae484b9f8ae4fc5843b9988b7d1048013e2dc05be75509550f5b67e9e26406/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:02:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41ae484b9f8ae4fc5843b9988b7d1048013e2dc05be75509550f5b67e9e26406/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:02:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41ae484b9f8ae4fc5843b9988b7d1048013e2dc05be75509550f5b67e9e26406/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:02:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41ae484b9f8ae4fc5843b9988b7d1048013e2dc05be75509550f5b67e9e26406/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:02:46 np0005603787 podman[109071]: 2026-01-31 10:02:46.270197233 +0000 UTC m=+0.017894185 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:02:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:02:46 np0005603787 podman[109071]: 2026-01-31 10:02:46.470545802 +0000 UTC m=+0.218242784 container init 2298ac2c0f1cf757fadd619438431f3ec8e93fddd364feb035b6c1b7e9458cf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_chebyshev, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:02:46 np0005603787 podman[109071]: 2026-01-31 10:02:46.476564642 +0000 UTC m=+0.224261594 container start 2298ac2c0f1cf757fadd619438431f3ec8e93fddd364feb035b6c1b7e9458cf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_chebyshev, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:02:46 np0005603787 podman[109071]: 2026-01-31 10:02:46.515227754 +0000 UTC m=+0.262924706 container attach 2298ac2c0f1cf757fadd619438431f3ec8e93fddd364feb035b6c1b7e9458cf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_chebyshev, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 05:02:46 np0005603787 reverent_chebyshev[109087]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:02:46 np0005603787 reverent_chebyshev[109087]: --> All data devices are unavailable
Jan 31 05:02:46 np0005603787 systemd[1]: libpod-2298ac2c0f1cf757fadd619438431f3ec8e93fddd364feb035b6c1b7e9458cf5.scope: Deactivated successfully.
Jan 31 05:02:46 np0005603787 podman[109071]: 2026-01-31 10:02:46.886162185 +0000 UTC m=+0.633859167 container died 2298ac2c0f1cf757fadd619438431f3ec8e93fddd364feb035b6c1b7e9458cf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 31 05:02:46 np0005603787 systemd[1]: var-lib-containers-storage-overlay-41ae484b9f8ae4fc5843b9988b7d1048013e2dc05be75509550f5b67e9e26406-merged.mount: Deactivated successfully.
Jan 31 05:02:46 np0005603787 podman[109071]: 2026-01-31 10:02:46.926852352 +0000 UTC m=+0.674549304 container remove 2298ac2c0f1cf757fadd619438431f3ec8e93fddd364feb035b6c1b7e9458cf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_chebyshev, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 05:02:46 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 31 05:02:46 np0005603787 systemd[1]: libpod-conmon-2298ac2c0f1cf757fadd619438431f3ec8e93fddd364feb035b6c1b7e9458cf5.scope: Deactivated successfully.
Jan 31 05:02:46 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 31 05:02:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:47 np0005603787 podman[109180]: 2026-01-31 10:02:47.297143237 +0000 UTC m=+0.033274942 container create 6ecd15c187ceb3d94ae6055f5ae4dda786e42cedeb7758725e2d83040c12fbc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:02:47 np0005603787 systemd[1]: Started libpod-conmon-6ecd15c187ceb3d94ae6055f5ae4dda786e42cedeb7758725e2d83040c12fbc8.scope.
Jan 31 05:02:47 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:02:47 np0005603787 podman[109180]: 2026-01-31 10:02:47.367790565 +0000 UTC m=+0.103922370 container init 6ecd15c187ceb3d94ae6055f5ae4dda786e42cedeb7758725e2d83040c12fbc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lewin, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 05:02:47 np0005603787 podman[109180]: 2026-01-31 10:02:47.374262446 +0000 UTC m=+0.110394151 container start 6ecd15c187ceb3d94ae6055f5ae4dda786e42cedeb7758725e2d83040c12fbc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:02:47 np0005603787 great_lewin[109196]: 167 167
Jan 31 05:02:47 np0005603787 podman[109180]: 2026-01-31 10:02:47.281254356 +0000 UTC m=+0.017386071 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:02:47 np0005603787 systemd[1]: libpod-6ecd15c187ceb3d94ae6055f5ae4dda786e42cedeb7758725e2d83040c12fbc8.scope: Deactivated successfully.
Jan 31 05:02:47 np0005603787 podman[109180]: 2026-01-31 10:02:47.379103524 +0000 UTC m=+0.115235279 container attach 6ecd15c187ceb3d94ae6055f5ae4dda786e42cedeb7758725e2d83040c12fbc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Jan 31 05:02:47 np0005603787 podman[109180]: 2026-01-31 10:02:47.384817315 +0000 UTC m=+0.120949060 container died 6ecd15c187ceb3d94ae6055f5ae4dda786e42cedeb7758725e2d83040c12fbc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lewin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:02:47 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ecc183d272e7e918a28faf471c41b2f5b1b34536bf3fae7c4406b0f19085fa3d-merged.mount: Deactivated successfully.
Jan 31 05:02:47 np0005603787 podman[109180]: 2026-01-31 10:02:47.433324188 +0000 UTC m=+0.169455933 container remove 6ecd15c187ceb3d94ae6055f5ae4dda786e42cedeb7758725e2d83040c12fbc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 05:02:47 np0005603787 systemd[1]: libpod-conmon-6ecd15c187ceb3d94ae6055f5ae4dda786e42cedeb7758725e2d83040c12fbc8.scope: Deactivated successfully.
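The create/init/start/attach/died/remove events above belong to a short-lived helper container (great_lewin) that cephadm runs from the quay.io/ceph/ceph image; it executes a single command and is torn down within a fraction of a second, which is why systemd immediately reports the libpod scopes as deactivated. A minimal sketch for pulling one such container's lifecycle out of this journal, assuming the log has been exported to a plain text file (the file name below is illustrative):

import re

# Podman lifecycle events as they appear in the journal lines above.
EVENTS = ("container create", "container init", "container start",
          "container attach", "container died", "container remove")
# Container ID taken from the "container create" line for great_lewin.
CID = "6ecd15c187ceb3d94ae6055f5ae4dda786e42cedeb7758725e2d83040c12fbc8"

with open("node.log") as fh:  # illustrative path to the exported journal
    for line in fh:
        if CID in line and any(ev in line for ev in EVENTS):
            # keep just the podman timestamp and the event keyword
            m = re.search(r"(\d{4}-\d{2}-\d{2} [\d:.]+).*?(container \w+)", line)
            if m:
                print(m.group(1), m.group(2))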
Jan 31 05:02:47 np0005603787 podman[109222]: 2026-01-31 10:02:47.573558657 +0000 UTC m=+0.041517318 container create 3f8609d68c532a68b603b3fdfd31dadbf91cb2c0160df7a4ed75efe6b0da9a17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Jan 31 05:02:47 np0005603787 systemd[1]: Started libpod-conmon-3f8609d68c532a68b603b3fdfd31dadbf91cb2c0160df7a4ed75efe6b0da9a17.scope.
Jan 31 05:02:47 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:02:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a973ac6346d5aff528cccddef09d78eaff9b5d709c065687757590f828b605/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:02:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a973ac6346d5aff528cccddef09d78eaff9b5d709c065687757590f828b605/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:02:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a973ac6346d5aff528cccddef09d78eaff9b5d709c065687757590f828b605/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:02:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a973ac6346d5aff528cccddef09d78eaff9b5d709c065687757590f828b605/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:02:47 np0005603787 podman[109222]: 2026-01-31 10:02:47.630032471 +0000 UTC m=+0.097991152 container init 3f8609d68c532a68b603b3fdfd31dadbf91cb2c0160df7a4ed75efe6b0da9a17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_turing, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:02:47 np0005603787 podman[109222]: 2026-01-31 10:02:47.634856469 +0000 UTC m=+0.102815130 container start 3f8609d68c532a68b603b3fdfd31dadbf91cb2c0160df7a4ed75efe6b0da9a17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_turing, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 05:02:47 np0005603787 podman[109222]: 2026-01-31 10:02:47.638520316 +0000 UTC m=+0.106479057 container attach 3f8609d68c532a68b603b3fdfd31dadbf91cb2c0160df7a4ed75efe6b0da9a17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_turing, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 05:02:47 np0005603787 podman[109222]: 2026-01-31 10:02:47.551430872 +0000 UTC m=+0.019389573 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:02:47 np0005603787 gifted_turing[109238]: {
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:    "0": [
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:        {
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "devices": [
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "/dev/loop3"
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            ],
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "lv_name": "ceph_lv0",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "lv_size": "21470642176",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "name": "ceph_lv0",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "tags": {
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.cluster_name": "ceph",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.crush_device_class": "",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.encrypted": "0",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.objectstore": "bluestore",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.osd_id": "0",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.type": "block",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.vdo": "0",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.with_tpm": "0"
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            },
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "type": "block",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "vg_name": "ceph_vg0"
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:        }
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:    ],
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:    "1": [
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:        {
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "devices": [
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "/dev/loop4"
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            ],
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "lv_name": "ceph_lv1",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "lv_size": "21470642176",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "name": "ceph_lv1",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "tags": {
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.cluster_name": "ceph",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.crush_device_class": "",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.encrypted": "0",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.objectstore": "bluestore",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.osd_id": "1",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.type": "block",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.vdo": "0",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.with_tpm": "0"
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            },
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "type": "block",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "vg_name": "ceph_vg1"
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:        }
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:    ],
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:    "2": [
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:        {
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "devices": [
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "/dev/loop5"
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            ],
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "lv_name": "ceph_lv2",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "lv_size": "21470642176",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "name": "ceph_lv2",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "tags": {
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.cluster_name": "ceph",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.crush_device_class": "",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.encrypted": "0",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.objectstore": "bluestore",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.osd_id": "2",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.type": "block",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.vdo": "0",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:                "ceph.with_tpm": "0"
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            },
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "type": "block",
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:            "vg_name": "ceph_vg2"
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:        }
Jan 31 05:02:47 np0005603787 gifted_turing[109238]:    ]
Jan 31 05:02:47 np0005603787 gifted_turing[109238]: }
Jan 31 05:02:47 np0005603787 systemd[1]: libpod-3f8609d68c532a68b603b3fdfd31dadbf91cb2c0160df7a4ed75efe6b0da9a17.scope: Deactivated successfully.
Jan 31 05:02:47 np0005603787 podman[109222]: 2026-01-31 10:02:47.899423877 +0000 UTC m=+0.367382538 container died 3f8609d68c532a68b603b3fdfd31dadbf91cb2c0160df7a4ed75efe6b0da9a17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_turing, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:02:47 np0005603787 systemd[1]: var-lib-containers-storage-overlay-38a973ac6346d5aff528cccddef09d78eaff9b5d709c065687757590f828b605-merged.mount: Deactivated successfully.
Jan 31 05:02:47 np0005603787 podman[109222]: 2026-01-31 10:02:47.936914489 +0000 UTC m=+0.404873150 container remove 3f8609d68c532a68b603b3fdfd31dadbf91cb2c0160df7a4ed75efe6b0da9a17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_turing, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 05:02:47 np0005603787 systemd[1]: libpod-conmon-3f8609d68c532a68b603b3fdfd31dadbf91cb2c0160df7a4ed75efe6b0da9a17.scope: Deactivated successfully.
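The JSON block printed by gifted_turing looks like "ceph-volume lvm list --format json" output for this node's three OSDs: one logical volume per OSD (ceph_lv0..2 in ceph_vg0..2), each backed by a loop device (/dev/loop3..5) and tagged for bluestore in cluster fsid 962d77ae-dc67-5de8-89d8-3d1670c67b61. A small sketch that summarizes that layout per OSD, assuming the JSON has been saved to a file (the filename is illustrative):

import json

# Assumption: the JSON emitted by the gifted_turing container above was saved to this file.
with open("ceph_volume_list.json") as fh:
    report = json.load(fh)

# Top-level keys are OSD ids ("0", "1", "2"); each maps to a list of LV records.
for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: lv={lv['lv_path']} devices={','.join(lv['devices'])} "
              f"osd_fsid={tags['ceph.osd_fsid']} objectstore={tags['ceph.objectstore']}")

Against the block above this prints, for example: osd.0: lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c objectstore=bluestore.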
Jan 31 05:02:48 np0005603787 podman[109321]: 2026-01-31 10:02:48.309526455 +0000 UTC m=+0.028402102 container create 5b8616217e014c131ed9063687261bfa0f76342eda5016a79e2b2c7560d69893 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_jepsen, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:02:48 np0005603787 systemd[1]: Started libpod-conmon-5b8616217e014c131ed9063687261bfa0f76342eda5016a79e2b2c7560d69893.scope.
Jan 31 05:02:48 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:02:48 np0005603787 podman[109321]: 2026-01-31 10:02:48.385128734 +0000 UTC m=+0.104004431 container init 5b8616217e014c131ed9063687261bfa0f76342eda5016a79e2b2c7560d69893 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_jepsen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 05:02:48 np0005603787 podman[109321]: 2026-01-31 10:02:48.392251203 +0000 UTC m=+0.111126850 container start 5b8616217e014c131ed9063687261bfa0f76342eda5016a79e2b2c7560d69893 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_jepsen, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:02:48 np0005603787 podman[109321]: 2026-01-31 10:02:48.297167097 +0000 UTC m=+0.016042764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:02:48 np0005603787 heuristic_jepsen[109338]: 167 167
Jan 31 05:02:48 np0005603787 systemd[1]: libpod-5b8616217e014c131ed9063687261bfa0f76342eda5016a79e2b2c7560d69893.scope: Deactivated successfully.
Jan 31 05:02:48 np0005603787 podman[109321]: 2026-01-31 10:02:48.395786886 +0000 UTC m=+0.114662573 container attach 5b8616217e014c131ed9063687261bfa0f76342eda5016a79e2b2c7560d69893 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_jepsen, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:02:48 np0005603787 podman[109321]: 2026-01-31 10:02:48.396215518 +0000 UTC m=+0.115091175 container died 5b8616217e014c131ed9063687261bfa0f76342eda5016a79e2b2c7560d69893 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_jepsen, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 05:02:48 np0005603787 systemd[1]: var-lib-containers-storage-overlay-c43aafef1b7045891bfd99418db334c08fdbd1c54c79c3084fa65ca194f3b83f-merged.mount: Deactivated successfully.
Jan 31 05:02:48 np0005603787 podman[109321]: 2026-01-31 10:02:48.448254394 +0000 UTC m=+0.167130041 container remove 5b8616217e014c131ed9063687261bfa0f76342eda5016a79e2b2c7560d69893 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_jepsen, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:02:48 np0005603787 systemd[1]: libpod-conmon-5b8616217e014c131ed9063687261bfa0f76342eda5016a79e2b2c7560d69893.scope: Deactivated successfully.
Jan 31 05:02:48 np0005603787 podman[109362]: 2026-01-31 10:02:48.576934988 +0000 UTC m=+0.032330646 container create 4191cad2f7695af6eb108beb7d1e30b9241b9be2ade51bb9dff17f7226a7cc42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_neumann, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:02:48 np0005603787 systemd[1]: Started libpod-conmon-4191cad2f7695af6eb108beb7d1e30b9241b9be2ade51bb9dff17f7226a7cc42.scope.
Jan 31 05:02:48 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:02:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f03b72b23a90c588d73e27e82a0f974d2f05df59d69eba3f363bb5939c1d82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:02:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f03b72b23a90c588d73e27e82a0f974d2f05df59d69eba3f363bb5939c1d82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:02:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f03b72b23a90c588d73e27e82a0f974d2f05df59d69eba3f363bb5939c1d82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:02:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f03b72b23a90c588d73e27e82a0f974d2f05df59d69eba3f363bb5939c1d82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:02:48 np0005603787 podman[109362]: 2026-01-31 10:02:48.562207048 +0000 UTC m=+0.017602706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:02:48 np0005603787 podman[109362]: 2026-01-31 10:02:48.668057138 +0000 UTC m=+0.123452816 container init 4191cad2f7695af6eb108beb7d1e30b9241b9be2ade51bb9dff17f7226a7cc42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 05:02:48 np0005603787 podman[109362]: 2026-01-31 10:02:48.674101047 +0000 UTC m=+0.129496725 container start 4191cad2f7695af6eb108beb7d1e30b9241b9be2ade51bb9dff17f7226a7cc42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_neumann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 05:02:48 np0005603787 podman[109362]: 2026-01-31 10:02:48.67795976 +0000 UTC m=+0.133355458 container attach 4191cad2f7695af6eb108beb7d1e30b9241b9be2ade51bb9dff17f7226a7cc42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:02:48 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 31 05:02:48 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 31 05:02:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:49 np0005603787 lvm[109457]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:02:49 np0005603787 lvm[109458]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:02:49 np0005603787 lvm[109457]: VG ceph_vg0 finished
Jan 31 05:02:49 np0005603787 lvm[109458]: VG ceph_vg1 finished
Jan 31 05:02:49 np0005603787 lvm[109460]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:02:49 np0005603787 lvm[109460]: VG ceph_vg2 finished
Jan 31 05:02:49 np0005603787 systemd[76537]: Created slice User Background Tasks Slice.
Jan 31 05:02:49 np0005603787 systemd[76537]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 05:02:49 np0005603787 systemd[76537]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 05:02:49 np0005603787 peaceful_neumann[109379]: {}
Jan 31 05:02:49 np0005603787 systemd[1]: libpod-4191cad2f7695af6eb108beb7d1e30b9241b9be2ade51bb9dff17f7226a7cc42.scope: Deactivated successfully.
Jan 31 05:02:49 np0005603787 podman[109362]: 2026-01-31 10:02:49.401487348 +0000 UTC m=+0.856883006 container died 4191cad2f7695af6eb108beb7d1e30b9241b9be2ade51bb9dff17f7226a7cc42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_neumann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 05:02:49 np0005603787 systemd[1]: libpod-4191cad2f7695af6eb108beb7d1e30b9241b9be2ade51bb9dff17f7226a7cc42.scope: Consumed 1.015s CPU time.
Jan 31 05:02:49 np0005603787 systemd[1]: var-lib-containers-storage-overlay-68f03b72b23a90c588d73e27e82a0f974d2f05df59d69eba3f363bb5939c1d82-merged.mount: Deactivated successfully.
Jan 31 05:02:49 np0005603787 podman[109362]: 2026-01-31 10:02:49.436471453 +0000 UTC m=+0.891867101 container remove 4191cad2f7695af6eb108beb7d1e30b9241b9be2ade51bb9dff17f7226a7cc42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:02:49 np0005603787 systemd[1]: libpod-conmon-4191cad2f7695af6eb108beb7d1e30b9241b9be2ade51bb9dff17f7226a7cc42.scope: Deactivated successfully.
Jan 31 05:02:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:02:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:02:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:02:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
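The two config-key set commands from mgr.compute-0.mdmqaq are cephadm persisting the host and device inventory it just collected (the JSON above) into the monitors' key/value store under mgr/cephadm/host.compute-0*. A hedged sketch for reading one of those cached entries back, using the key name from the log; the exact value schema is an assumption, since the log does not show it:

import json
import subprocess

# Key name copied from the mon_command line above; the stored value's schema is assumed.
key = "mgr/cephadm/host.compute-0.devices.0"
raw = subprocess.run(["ceph", "config-key", "get", key],
                     capture_output=True, text=True, check=True).stdout
print(json.dumps(json.loads(raw), indent=2)[:400])  # show the start of the cached JSON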
Jan 31 05:02:49 np0005603787 systemd-logind[786]: New session 37 of user zuul.
Jan 31 05:02:49 np0005603787 systemd[1]: Started Session 37 of User zuul.
Jan 31 05:02:49 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 31 05:02:49 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 31 05:02:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:02:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:02:50 np0005603787 python3.9[109654]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:02:50 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 31 05:02:50 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 31 05:02:50 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 31 05:02:50 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 31 05:02:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:02:51 np0005603787 python3.9[109808]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:02:51 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 31 05:02:52 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 31 05:02:52 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 31 05:02:52 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 31 05:02:52 np0005603787 python3.9[110001]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:02:52 np0005603787 systemd[1]: session-37.scope: Deactivated successfully.
Jan 31 05:02:52 np0005603787 systemd[1]: session-37.scope: Consumed 1.931s CPU time.
Jan 31 05:02:52 np0005603787 systemd-logind[786]: Session 37 logged out. Waiting for processes to exit.
Jan 31 05:02:52 np0005603787 systemd-logind[786]: Removed session 37.
Jan 31 05:02:53 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 31 05:02:53 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 31 05:02:53 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 31 05:02:53 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 31 05:02:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:53 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 31 05:02:53 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1786947556520692e-06 of space, bias 4.0, pg target 0.0014144337067824831 quantized to 16 (current 16)
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:02:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
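Each pg_autoscaler pair above logs, per pool, the fraction of raw capacity in use, the pool's bias, and the resulting PG target before quantization to the current pg_num. The logged targets are consistent with target = usage_ratio * bias * 300, i.e. a budget of roughly 300 PGs (plausibly mon_target_pg_per_osd = 100 across 3 OSDs; that budget is inferred from the numbers, not stated in the log). A quick check against the '.mgr' and 'cephfs.cephfs.meta' entries:

# Assumption: the PG budget is 300 (e.g. mon_target_pg_per_osd=100 * 3 OSDs);
# this value is inferred from the logged targets, not stated anywhere in the log.
PG_BUDGET = 300

pools = {
    ".mgr":               (7.185749983720779e-06, 1.0),   # (usage ratio, bias) from the log
    "cephfs.cephfs.meta": (1.1786947556520692e-06, 4.0),
}

for name, (ratio, bias) in pools.items():
    # matches the logged targets (0.0021557..., 0.0014144...) up to float rounding
    print(f"{name}: pg target {ratio * bias * PG_BUDGET}")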
Jan 31 05:02:54 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 31 05:02:55 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 31 05:02:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:56 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 31 05:02:56 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 31 05:02:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:02:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:57 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 31 05:02:57 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 31 05:02:58 np0005603787 systemd-logind[786]: New session 38 of user zuul.
Jan 31 05:02:58 np0005603787 systemd[1]: Started Session 38 of User zuul.
Jan 31 05:02:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:02:59 np0005603787 python3.9[110181]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:03:00 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 31 05:03:00 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 31 05:03:00 np0005603787 python3.9[110335]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:03:01 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 31 05:03:01 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 31 05:03:01 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 31 05:03:01 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 31 05:03:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:20 np0005603787 python3.9[113543]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:03:20 np0005603787 rsyslogd[1002]: imjournal: 171 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 31 05:03:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:03:22 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Jan 31 05:03:22 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Jan 31 05:03:22 np0005603787 python3.9[113696]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 05:03:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:23 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 31 05:03:23 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 31 05:03:23 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Jan 31 05:03:23 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Jan 31 05:03:23 np0005603787 python3.9[113848]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:03:23 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 31 05:03:23 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 31 05:03:24 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.c scrub starts
Jan 31 05:03:24 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.c scrub ok
Jan 31 05:03:24 np0005603787 python3.9[113926]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:03:24 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 31 05:03:24 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 31 05:03:24 np0005603787 python3.9[114078]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:03:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:25 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 31 05:03:25 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 31 05:03:25 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 31 05:03:25 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 31 05:03:25 np0005603787 python3.9[114156]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:03:26 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 31 05:03:26 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 31 05:03:26 np0005603787 python3.9[114308]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:03:26 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:03:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:27 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 31 05:03:27 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 31 05:03:27 np0005603787 python3.9[114460]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:03:28 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Jan 31 05:03:28 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Jan 31 05:03:28 np0005603787 python3.9[114544]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:03:28 np0005603787 systemd[1]: session-38.scope: Deactivated successfully.
Jan 31 05:03:28 np0005603787 systemd[1]: session-38.scope: Consumed 20.475s CPU time.
Jan 31 05:03:28 np0005603787 systemd-logind[786]: Session 38 logged out. Waiting for processes to exit.
Jan 31 05:03:28 np0005603787 systemd-logind[786]: Removed session 38.
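Session 38 is an Ansible run that sets up time synchronization: it installs chrony, lays /etc/chrony.conf and /etc/sysconfig/chronyd down from templates, pins PEERNTP=no in /etc/sysconfig/network, and enables and starts chronyd. A small verification sketch for that end state, using only the paths and unit name that appear in the log (the check itself is illustrative and not part of the playbook):

import subprocess
from pathlib import Path

def line_present(path, wanted):
    # True if the file exists and contains the exact line (ignoring surrounding whitespace).
    p = Path(path)
    return p.exists() and any(line.strip() == wanted for line in p.read_text().splitlines())

checks = {
    "/etc/chrony.conf present":       Path("/etc/chrony.conf").exists(),
    "/etc/sysconfig/chronyd present": Path("/etc/sysconfig/chronyd").exists(),
    "PEERNTP=no pinned":              line_present("/etc/sysconfig/network", "PEERNTP=no"),
    "chronyd enabled":                subprocess.run(["systemctl", "is-enabled", "chronyd"],
                                                     capture_output=True, text=True).stdout.strip() == "enabled",
}

for name, ok in checks.items():
    print(f"{name}: {'OK' if ok else 'MISSING'}")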
Jan 31 05:03:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:29 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 31 05:03:29 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 31 05:03:30 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Jan 31 05:03:30 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Jan 31 05:03:30 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 31 05:03:30 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 31 05:03:31 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 31 05:03:31 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 31 05:03:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:31 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 31 05:03:31 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 31 05:03:31 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:03:31 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 31 05:03:31 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 31 05:03:33 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 31 05:03:33 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 31 05:03:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:34 np0005603787 systemd-logind[786]: New session 39 of user zuul.
Jan 31 05:03:34 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 31 05:03:34 np0005603787 systemd[1]: Started Session 39 of User zuul.
Jan 31 05:03:34 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 31 05:03:34 np0005603787 python3.9[114727]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:03:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:35 np0005603787 python3.9[114879]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:03:35 np0005603787 python3.9[114957]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:03:36 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 31 05:03:36 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 31 05:03:36 np0005603787 systemd[1]: session-39.scope: Deactivated successfully.
Jan 31 05:03:36 np0005603787 systemd[1]: session-39.scope: Consumed 1.255s CPU time.
Jan 31 05:03:36 np0005603787 systemd-logind[786]: Session 39 logged out. Waiting for processes to exit.
Jan 31 05:03:36 np0005603787 systemd-logind[786]: Removed session 39.
Jan 31 05:03:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:03:36 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 31 05:03:36 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 31 05:03:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:40 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 31 05:03:40 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 31 05:03:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:41 np0005603787 systemd-logind[786]: New session 40 of user zuul.
Jan 31 05:03:41 np0005603787 systemd[1]: Started Session 40 of User zuul.
Jan 31 05:03:41 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 31 05:03:41 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 31 05:03:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:03:42 np0005603787 python3.9[115135]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:03:43 np0005603787 python3.9[115291]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:03:43
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'vms', 'volumes', 'backups', '.mgr', 'default.rgw.log']
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
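The balancer pass above runs in upmap mode with a 5% max-misplaced threshold and prepares 0/10 upmap changes, consistent with all 305 PGs already being active+clean. A minimal sketch, assuming admin access to the same cluster, of querying that state with the standard `ceph balancer status` command; the JSON keys read here are the commonly reported ones and are an assumption, not a fixed schema.

    import json
    import subprocess

    def balancer_status() -> dict:
        """Return `ceph balancer status` as a dict (assumes a reachable cluster and admin keyring)."""
        out = subprocess.run(
            ["ceph", "balancer", "status", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    status = balancer_status()
    # Typical fields include "active" and "mode"; with nothing misplaced the
    # optimizer produces an empty plan, matching "prepared 0/10 upmap changes" above.
    print(status.get("mode"), status.get("active"))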
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:43 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 31 05:03:43 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 31 05:03:43 np0005603787 python3.9[115466]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:03:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:03:44 np0005603787 python3.9[115544]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.ncj74rea recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:03:44 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 31 05:03:44 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 31 05:03:44 np0005603787 python3.9[115696]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:03:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:45 np0005603787 python3.9[115774]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.cqn5vohu recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:03:45 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 31 05:03:45 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 31 05:03:45 np0005603787 python3.9[115926]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:03:46 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 31 05:03:46 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 31 05:03:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:03:46 np0005603787 python3.9[116078]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:03:47 np0005603787 python3.9[116156]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:03:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:47 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 31 05:03:47 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 31 05:03:47 np0005603787 python3.9[116308]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:03:47 np0005603787 python3.9[116386]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:03:48 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 31 05:03:48 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 31 05:03:48 np0005603787 python3.9[116538]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:03:49 np0005603787 python3.9[116690]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:03:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:49 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 31 05:03:49 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 31 05:03:49 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 31 05:03:49 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 31 05:03:49 np0005603787 python3.9[116768]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:03:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:03:50 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:03:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:03:50 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:03:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:03:50 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:03:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:03:50 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:03:50 np0005603787 python3.9[116988]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:03:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:03:50 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:03:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:03:50 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
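The mon audit entries above show cephadm, acting through mgr.compute-0.mdmqaq, asking the leader monitor for a minimal ceph.conf and for the client.admin and client.bootstrap-osd keyrings before it probes the OSD devices. A minimal sketch of the same requests issued from the CLI, assuming /etc/ceph/ceph.conf and an admin keyring are present on the host; the output handling is illustrative only.

    import subprocess

    def ceph(*args: str) -> str:
        """Run a ceph CLI command and return stdout."""
        return subprocess.run(["ceph", *args], check=True, capture_output=True, text=True).stdout

    # The same mon_commands recorded in the audit channel above:
    minimal_conf = ceph("config", "generate-minimal-conf")        # minimal [global] with fsid + mon_host
    bootstrap_osd_key = ceph("auth", "get", "client.bootstrap-osd")

    print(minimal_conf)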
Jan 31 05:03:50 np0005603787 podman[117143]: 2026-01-31 10:03:50.420356116 +0000 UTC m=+0.035752900 container create c9ef156d19503fc4dc5034de890702cc81558df29c1c00d2cf8e36571cc42c62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_germain, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:03:50 np0005603787 systemd[1]: Started libpod-conmon-c9ef156d19503fc4dc5034de890702cc81558df29c1c00d2cf8e36571cc42c62.scope.
Jan 31 05:03:50 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 31 05:03:50 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:03:50 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 31 05:03:50 np0005603787 podman[117143]: 2026-01-31 10:03:50.498718647 +0000 UTC m=+0.114115451 container init c9ef156d19503fc4dc5034de890702cc81558df29c1c00d2cf8e36571cc42c62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_germain, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 05:03:50 np0005603787 podman[117143]: 2026-01-31 10:03:50.404416653 +0000 UTC m=+0.019813457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:03:50 np0005603787 podman[117143]: 2026-01-31 10:03:50.506822653 +0000 UTC m=+0.122219437 container start c9ef156d19503fc4dc5034de890702cc81558df29c1c00d2cf8e36571cc42c62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_germain, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 05:03:50 np0005603787 elated_germain[117160]: 167 167
Jan 31 05:03:50 np0005603787 systemd[1]: libpod-c9ef156d19503fc4dc5034de890702cc81558df29c1c00d2cf8e36571cc42c62.scope: Deactivated successfully.
Jan 31 05:03:50 np0005603787 podman[117143]: 2026-01-31 10:03:50.511968279 +0000 UTC m=+0.127365093 container attach c9ef156d19503fc4dc5034de890702cc81558df29c1c00d2cf8e36571cc42c62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:03:50 np0005603787 python3.9[117130]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:03:50 np0005603787 conmon[117160]: conmon c9ef156d19503fc4dc50 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9ef156d19503fc4dc5034de890702cc81558df29c1c00d2cf8e36571cc42c62.scope/container/memory.events
Jan 31 05:03:50 np0005603787 podman[117143]: 2026-01-31 10:03:50.51388111 +0000 UTC m=+0.129277894 container died c9ef156d19503fc4dc5034de890702cc81558df29c1c00d2cf8e36571cc42c62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_germain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:03:50 np0005603787 systemd[1]: var-lib-containers-storage-overlay-3f3293cc24d377e9bde16a2c77ed291497df7b70beb2e358446133010c492ae6-merged.mount: Deactivated successfully.
Jan 31 05:03:50 np0005603787 podman[117143]: 2026-01-31 10:03:50.574541801 +0000 UTC m=+0.189938585 container remove c9ef156d19503fc4dc5034de890702cc81558df29c1c00d2cf8e36571cc42c62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:03:50 np0005603787 systemd[1]: libpod-conmon-c9ef156d19503fc4dc5034de890702cc81558df29c1c00d2cf8e36571cc42c62.scope: Deactivated successfully.
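The podman sequence above (image pull, container create, init, start, attach, died, remove, all within roughly 0.2 s) is the signature of cephadm running a short-lived, auto-removed utility container from the quay.io/ceph/ceph image; the "167 167" printed by elated_germain matches the ceph uid/gid used inside these images. A minimal sketch of an equivalent one-shot invocation, assuming podman is installed and using the image digest from the log; the stat entrypoint is a guess at what such a uid/gid probe does and is not taken from the log.

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # One-shot, auto-removed container, analogous to the create/start/died/remove
    # cycle journald records above. The stat command is an illustrative assumption;
    # cephadm's real probe entrypoint may differ.
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    )
    print(result.stdout.strip())   # e.g. "167 167"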
Jan 31 05:03:50 np0005603787 podman[117214]: 2026-01-31 10:03:50.683283589 +0000 UTC m=+0.038275827 container create d1b396b407419addca5e3bc4b6710778a73e700375aad9a3abd2a316f5b21ea0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_rhodes, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:03:50 np0005603787 systemd[1]: Started libpod-conmon-d1b396b407419addca5e3bc4b6710778a73e700375aad9a3abd2a316f5b21ea0.scope.
Jan 31 05:03:50 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:03:50 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef8cd3b68aff1e57821e8f24ecfbb3e465fabd46c89cd4c5224b9856b2f1739/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:03:50 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef8cd3b68aff1e57821e8f24ecfbb3e465fabd46c89cd4c5224b9856b2f1739/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:03:50 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef8cd3b68aff1e57821e8f24ecfbb3e465fabd46c89cd4c5224b9856b2f1739/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:03:50 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef8cd3b68aff1e57821e8f24ecfbb3e465fabd46c89cd4c5224b9856b2f1739/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:03:50 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef8cd3b68aff1e57821e8f24ecfbb3e465fabd46c89cd4c5224b9856b2f1739/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:03:50 np0005603787 podman[117214]: 2026-01-31 10:03:50.665900467 +0000 UTC m=+0.020892715 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:03:50 np0005603787 podman[117214]: 2026-01-31 10:03:50.776374061 +0000 UTC m=+0.131366309 container init d1b396b407419addca5e3bc4b6710778a73e700375aad9a3abd2a316f5b21ea0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_rhodes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:03:50 np0005603787 podman[117214]: 2026-01-31 10:03:50.783304005 +0000 UTC m=+0.138296233 container start d1b396b407419addca5e3bc4b6710778a73e700375aad9a3abd2a316f5b21ea0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 05:03:50 np0005603787 podman[117214]: 2026-01-31 10:03:50.787315502 +0000 UTC m=+0.142307740 container attach d1b396b407419addca5e3bc4b6710778a73e700375aad9a3abd2a316f5b21ea0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_rhodes, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 05:03:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:03:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:03:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:03:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:51 np0005603787 magical_rhodes[117278]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:03:51 np0005603787 magical_rhodes[117278]: --> All data devices are unavailable
Jan 31 05:03:51 np0005603787 systemd[1]: libpod-d1b396b407419addca5e3bc4b6710778a73e700375aad9a3abd2a316f5b21ea0.scope: Deactivated successfully.
Jan 31 05:03:51 np0005603787 podman[117214]: 2026-01-31 10:03:51.207507322 +0000 UTC m=+0.562499550 container died d1b396b407419addca5e3bc4b6710778a73e700375aad9a3abd2a316f5b21ea0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_rhodes, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:03:51 np0005603787 systemd[1]: var-lib-containers-storage-overlay-7ef8cd3b68aff1e57821e8f24ecfbb3e465fabd46c89cd4c5224b9856b2f1739-merged.mount: Deactivated successfully.
Jan 31 05:03:51 np0005603787 podman[117214]: 2026-01-31 10:03:51.255741124 +0000 UTC m=+0.610733362 container remove d1b396b407419addca5e3bc4b6710778a73e700375aad9a3abd2a316f5b21ea0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 05:03:51 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 31 05:03:51 np0005603787 systemd[1]: libpod-conmon-d1b396b407419addca5e3bc4b6710778a73e700375aad9a3abd2a316f5b21ea0.scope: Deactivated successfully.
Jan 31 05:03:51 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 31 05:03:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:03:51 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 31 05:03:51 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 31 05:03:51 np0005603787 python3.9[117373]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:03:51 np0005603787 systemd[1]: Reloading.
Jan 31 05:03:51 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:03:51 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:03:51 np0005603787 podman[117483]: 2026-01-31 10:03:51.685982371 +0000 UTC m=+0.037411575 container create db4accf185c050f497c34721a3b44dbf929874e04ca0b584bdb739d6febe2726 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:03:51 np0005603787 podman[117483]: 2026-01-31 10:03:51.666965826 +0000 UTC m=+0.018395050 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:03:51 np0005603787 systemd[1]: Started libpod-conmon-db4accf185c050f497c34721a3b44dbf929874e04ca0b584bdb739d6febe2726.scope.
Jan 31 05:03:51 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:03:51 np0005603787 podman[117483]: 2026-01-31 10:03:51.862669114 +0000 UTC m=+0.214098418 container init db4accf185c050f497c34721a3b44dbf929874e04ca0b584bdb739d6febe2726 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_morse, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 05:03:51 np0005603787 podman[117483]: 2026-01-31 10:03:51.869724871 +0000 UTC m=+0.221154075 container start db4accf185c050f497c34721a3b44dbf929874e04ca0b584bdb739d6febe2726 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_morse, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 05:03:51 np0005603787 podman[117483]: 2026-01-31 10:03:51.874092617 +0000 UTC m=+0.225521851 container attach db4accf185c050f497c34721a3b44dbf929874e04ca0b584bdb739d6febe2726 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 05:03:51 np0005603787 systemd[1]: libpod-db4accf185c050f497c34721a3b44dbf929874e04ca0b584bdb739d6febe2726.scope: Deactivated successfully.
Jan 31 05:03:51 np0005603787 musing_morse[117501]: 167 167
Jan 31 05:03:51 np0005603787 podman[117483]: 2026-01-31 10:03:51.87646177 +0000 UTC m=+0.227890974 container died db4accf185c050f497c34721a3b44dbf929874e04ca0b584bdb739d6febe2726 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_morse, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:03:51 np0005603787 conmon[117501]: conmon db4accf185c050f497c3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db4accf185c050f497c34721a3b44dbf929874e04ca0b584bdb739d6febe2726.scope/container/memory.events
Jan 31 05:03:51 np0005603787 systemd[1]: var-lib-containers-storage-overlay-221880977a6979b1d1fbffd73b4a1a76dbb533bc2541553308da5ad04be2dc96-merged.mount: Deactivated successfully.
Jan 31 05:03:51 np0005603787 podman[117483]: 2026-01-31 10:03:51.909355734 +0000 UTC m=+0.260784938 container remove db4accf185c050f497c34721a3b44dbf929874e04ca0b584bdb739d6febe2726 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 05:03:51 np0005603787 systemd[1]: libpod-conmon-db4accf185c050f497c34721a3b44dbf929874e04ca0b584bdb739d6febe2726.scope: Deactivated successfully.
Jan 31 05:03:52 np0005603787 podman[117554]: 2026-01-31 10:03:52.019982323 +0000 UTC m=+0.037249911 container create 857227c67be5c88a48d23655634287a4b93b6aa964af93bd142b3cb0b569ba3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 05:03:52 np0005603787 systemd[1]: Started libpod-conmon-857227c67be5c88a48d23655634287a4b93b6aa964af93bd142b3cb0b569ba3a.scope.
Jan 31 05:03:52 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:03:52 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8fcc5df9e2ed46adec63c3c08a74dc875c443be7e34be0891e6ac3f566934b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:03:52 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8fcc5df9e2ed46adec63c3c08a74dc875c443be7e34be0891e6ac3f566934b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:03:52 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8fcc5df9e2ed46adec63c3c08a74dc875c443be7e34be0891e6ac3f566934b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:03:52 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8fcc5df9e2ed46adec63c3c08a74dc875c443be7e34be0891e6ac3f566934b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:03:52 np0005603787 podman[117554]: 2026-01-31 10:03:52.003209777 +0000 UTC m=+0.020477395 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:03:52 np0005603787 podman[117554]: 2026-01-31 10:03:52.11478949 +0000 UTC m=+0.132057098 container init 857227c67be5c88a48d23655634287a4b93b6aa964af93bd142b3cb0b569ba3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dewdney, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:03:52 np0005603787 podman[117554]: 2026-01-31 10:03:52.1211679 +0000 UTC m=+0.138435488 container start 857227c67be5c88a48d23655634287a4b93b6aa964af93bd142b3cb0b569ba3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dewdney, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:03:52 np0005603787 podman[117554]: 2026-01-31 10:03:52.138532761 +0000 UTC m=+0.155800379 container attach 857227c67be5c88a48d23655634287a4b93b6aa964af93bd142b3cb0b569ba3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]: {
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:    "0": [
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:        {
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "devices": [
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "/dev/loop3"
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            ],
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "lv_name": "ceph_lv0",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "lv_size": "21470642176",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "name": "ceph_lv0",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "tags": {
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.cluster_name": "ceph",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.crush_device_class": "",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.encrypted": "0",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.objectstore": "bluestore",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.osd_id": "0",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.type": "block",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.vdo": "0",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.with_tpm": "0"
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            },
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "type": "block",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "vg_name": "ceph_vg0"
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:        }
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:    ],
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:    "1": [
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:        {
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "devices": [
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "/dev/loop4"
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            ],
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "lv_name": "ceph_lv1",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "lv_size": "21470642176",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "name": "ceph_lv1",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "tags": {
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.cluster_name": "ceph",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.crush_device_class": "",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.encrypted": "0",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.objectstore": "bluestore",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.osd_id": "1",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.type": "block",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.vdo": "0",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.with_tpm": "0"
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            },
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "type": "block",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "vg_name": "ceph_vg1"
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:        }
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:    ],
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:    "2": [
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:        {
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "devices": [
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "/dev/loop5"
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            ],
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "lv_name": "ceph_lv2",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "lv_size": "21470642176",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "name": "ceph_lv2",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "tags": {
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.cluster_name": "ceph",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.crush_device_class": "",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.encrypted": "0",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.objectstore": "bluestore",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.osd_id": "2",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.type": "block",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.vdo": "0",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:                "ceph.with_tpm": "0"
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            },
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "type": "block",
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:            "vg_name": "ceph_vg2"
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:        }
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]:    ]
Jan 31 05:03:52 np0005603787 unruffled_dewdney[117619]: }
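The unruffled_dewdney container prints a JSON inventory of the three BlueStore OSDs (osd.0 through osd.2), each backed by an LVM logical volume on a loop device, i.e. the shape of `ceph-volume lvm list --format json` output. A minimal sketch, assuming that JSON has been captured to a string, of mapping each OSD id to its backing devices and osd_fsid; the key names are taken directly from the listing above.

    import json

    def osd_devices(ceph_volume_json: str) -> dict:
        """Map OSD id -> (backing devices, osd_fsid) from a ceph-volume lvm list JSON dump."""
        listing = json.loads(ceph_volume_json)
        result = {}
        for osd_id, volumes in listing.items():
            for vol in volumes:
                result[osd_id] = (vol["devices"], vol["tags"]["ceph.osd_fsid"])
        return result

    # With the listing printed above this yields:
    #   {'0': (['/dev/loop3'], '4a39e342-98b4-4260-a68a-c160a0fcb60c'),
    #    '1': (['/dev/loop4'], '6af7a565-fb2b-4a54-af6d-dd6e6079328b'),
    #    '2': (['/dev/loop5'], '446dbac2-6402-4180-8661-54a9bd1028fb')}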
Jan 31 05:03:52 np0005603787 systemd[1]: libpod-857227c67be5c88a48d23655634287a4b93b6aa964af93bd142b3cb0b569ba3a.scope: Deactivated successfully.
Jan 31 05:03:52 np0005603787 podman[117554]: 2026-01-31 10:03:52.384599246 +0000 UTC m=+0.401866944 container died 857227c67be5c88a48d23655634287a4b93b6aa964af93bd142b3cb0b569ba3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dewdney, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:03:52 np0005603787 systemd[1]: var-lib-containers-storage-overlay-0a8fcc5df9e2ed46adec63c3c08a74dc875c443be7e34be0891e6ac3f566934b-merged.mount: Deactivated successfully.
Jan 31 05:03:52 np0005603787 python3.9[117699]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:03:52 np0005603787 podman[117554]: 2026-01-31 10:03:52.447047906 +0000 UTC m=+0.464315484 container remove 857227c67be5c88a48d23655634287a4b93b6aa964af93bd142b3cb0b569ba3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dewdney, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 05:03:52 np0005603787 systemd[1]: libpod-conmon-857227c67be5c88a48d23655634287a4b93b6aa964af93bd142b3cb0b569ba3a.scope: Deactivated successfully.
Jan 31 05:03:52 np0005603787 podman[117857]: 2026-01-31 10:03:52.809333387 +0000 UTC m=+0.043867806 container create eb04b0a6b1b6999078640a2c3f9baa452cd5910350f7cacb543f10e6353dfd85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:03:52 np0005603787 python3.9[117843]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:03:52 np0005603787 systemd[1]: Started libpod-conmon-eb04b0a6b1b6999078640a2c3f9baa452cd5910350f7cacb543f10e6353dfd85.scope.
Jan 31 05:03:52 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:03:52 np0005603787 podman[117857]: 2026-01-31 10:03:52.783990514 +0000 UTC m=+0.018524983 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:03:52 np0005603787 podman[117857]: 2026-01-31 10:03:52.895372683 +0000 UTC m=+0.129907092 container init eb04b0a6b1b6999078640a2c3f9baa452cd5910350f7cacb543f10e6353dfd85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 05:03:52 np0005603787 podman[117857]: 2026-01-31 10:03:52.900313674 +0000 UTC m=+0.134848073 container start eb04b0a6b1b6999078640a2c3f9baa452cd5910350f7cacb543f10e6353dfd85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_goodall, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:03:52 np0005603787 exciting_goodall[117873]: 167 167
Jan 31 05:03:52 np0005603787 systemd[1]: libpod-eb04b0a6b1b6999078640a2c3f9baa452cd5910350f7cacb543f10e6353dfd85.scope: Deactivated successfully.
Jan 31 05:03:52 np0005603787 conmon[117873]: conmon eb04b0a6b1b699907864 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eb04b0a6b1b6999078640a2c3f9baa452cd5910350f7cacb543f10e6353dfd85.scope/container/memory.events
Jan 31 05:03:52 np0005603787 podman[117857]: 2026-01-31 10:03:52.908214574 +0000 UTC m=+0.142748983 container attach eb04b0a6b1b6999078640a2c3f9baa452cd5910350f7cacb543f10e6353dfd85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 05:03:52 np0005603787 podman[117857]: 2026-01-31 10:03:52.90884834 +0000 UTC m=+0.143382729 container died eb04b0a6b1b6999078640a2c3f9baa452cd5910350f7cacb543f10e6353dfd85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_goodall, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:03:52 np0005603787 systemd[1]: var-lib-containers-storage-overlay-5b258ce317da23a9bd2aec75d039256e4adb8a405b4d770f07c883e0334d6d72-merged.mount: Deactivated successfully.
Jan 31 05:03:52 np0005603787 podman[117857]: 2026-01-31 10:03:52.94684214 +0000 UTC m=+0.181376529 container remove eb04b0a6b1b6999078640a2c3f9baa452cd5910350f7cacb543f10e6353dfd85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_goodall, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Jan 31 05:03:52 np0005603787 systemd[1]: libpod-conmon-eb04b0a6b1b6999078640a2c3f9baa452cd5910350f7cacb543f10e6353dfd85.scope: Deactivated successfully.
Jan 31 05:03:53 np0005603787 podman[117947]: 2026-01-31 10:03:53.064810913 +0000 UTC m=+0.040760273 container create 855e515eb5e8aa763c244d327fc7b5ec0396eaf334905109c72290c7aca181f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_murdock, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:03:53 np0005603787 systemd[1]: Started libpod-conmon-855e515eb5e8aa763c244d327fc7b5ec0396eaf334905109c72290c7aca181f6.scope.
Jan 31 05:03:53 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:03:53 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe4bca8ee36abdce30452f4b6ff99b8f3629111916b1884b557f7a91ffa9b8f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:03:53 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe4bca8ee36abdce30452f4b6ff99b8f3629111916b1884b557f7a91ffa9b8f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:03:53 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe4bca8ee36abdce30452f4b6ff99b8f3629111916b1884b557f7a91ffa9b8f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:03:53 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe4bca8ee36abdce30452f4b6ff99b8f3629111916b1884b557f7a91ffa9b8f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:03:53 np0005603787 podman[117947]: 2026-01-31 10:03:53.143823151 +0000 UTC m=+0.119772531 container init 855e515eb5e8aa763c244d327fc7b5ec0396eaf334905109c72290c7aca181f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_murdock, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 05:03:53 np0005603787 podman[117947]: 2026-01-31 10:03:53.04740493 +0000 UTC m=+0.023354320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:03:53 np0005603787 podman[117947]: 2026-01-31 10:03:53.149211315 +0000 UTC m=+0.125160675 container start 855e515eb5e8aa763c244d327fc7b5ec0396eaf334905109c72290c7aca181f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_murdock, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 31 05:03:53 np0005603787 podman[117947]: 2026-01-31 10:03:53.156770625 +0000 UTC m=+0.132720025 container attach 855e515eb5e8aa763c244d327fc7b5ec0396eaf334905109c72290c7aca181f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:03:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:53 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 31 05:03:53 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 31 05:03:53 np0005603787 python3.9[118069]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:03:53 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 31 05:03:53 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 31 05:03:53 np0005603787 lvm[118219]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:03:53 np0005603787 lvm[118219]: VG ceph_vg0 finished
Jan 31 05:03:53 np0005603787 lvm[118222]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:03:53 np0005603787 lvm[118222]: VG ceph_vg1 finished
Jan 31 05:03:53 np0005603787 python3.9[118196]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:03:53 np0005603787 lvm[118224]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:03:53 np0005603787 lvm[118224]: VG ceph_vg2 finished
Jan 31 05:03:53 np0005603787 stupefied_murdock[118012]: {}
Jan 31 05:03:53 np0005603787 systemd[1]: libpod-855e515eb5e8aa763c244d327fc7b5ec0396eaf334905109c72290c7aca181f6.scope: Deactivated successfully.
Jan 31 05:03:53 np0005603787 systemd[1]: libpod-855e515eb5e8aa763c244d327fc7b5ec0396eaf334905109c72290c7aca181f6.scope: Consumed 1.053s CPU time.
Jan 31 05:03:53 np0005603787 podman[117947]: 2026-01-31 10:03:53.975624314 +0000 UTC m=+0.951573674 container died 855e515eb5e8aa763c244d327fc7b5ec0396eaf334905109c72290c7aca181f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_murdock, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 05:03:54 np0005603787 systemd[1]: var-lib-containers-storage-overlay-fe4bca8ee36abdce30452f4b6ff99b8f3629111916b1884b557f7a91ffa9b8f2-merged.mount: Deactivated successfully.
Jan 31 05:03:54 np0005603787 podman[117947]: 2026-01-31 10:03:54.023106385 +0000 UTC m=+0.999055745 container remove 855e515eb5e8aa763c244d327fc7b5ec0396eaf334905109c72290c7aca181f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_murdock, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 05:03:54 np0005603787 systemd[1]: libpod-conmon-855e515eb5e8aa763c244d327fc7b5ec0396eaf334905109c72290c7aca181f6.scope: Deactivated successfully.
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1786947556520692e-06 of space, bias 4.0, pg target 0.0014144337067824831 quantized to 16 (current 16)
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:03:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:03:54 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:03:54 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:03:54 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:03:54 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:03:54 np0005603787 python3.9[118418]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:03:54 np0005603787 systemd[1]: Reloading.
Jan 31 05:03:54 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:03:54 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:03:54 np0005603787 systemd[1]: Starting Create netns directory...
Jan 31 05:03:54 np0005603787 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 05:03:54 np0005603787 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 05:03:54 np0005603787 systemd[1]: Finished Create netns directory.
Jan 31 05:03:54 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:03:54 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:03:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:55 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 31 05:03:55 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 31 05:03:55 np0005603787 python3.9[118611]: ansible-ansible.builtin.service_facts Invoked
Jan 31 05:03:55 np0005603787 network[118628]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 05:03:55 np0005603787 network[118629]: 'network-scripts' will be removed from distribution in near future.
Jan 31 05:03:55 np0005603787 network[118630]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 05:03:56 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 31 05:03:56 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 31 05:03:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:03:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:57 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 31 05:03:57 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 31 05:03:57 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 31 05:03:57 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 31 05:03:57 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 31 05:03:57 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 31 05:03:58 np0005603787 python3.9[118892]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:03:58 np0005603787 python3.9[118970]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:03:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:03:59 np0005603787 python3.9[119122]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:04:00 np0005603787 python3.9[119274]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:04:00 np0005603787 python3.9[119352]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:04:00 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 31 05:04:00 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:04:00.932274) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 05:04:00 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 31 05:04:00 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853840932360, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7272, "num_deletes": 251, "total_data_size": 9862548, "memory_usage": 10025120, "flush_reason": "Manual Compaction"}
Jan 31 05:04:00 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 31 05:04:00 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853840980879, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7843922, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7415, "table_properties": {"data_size": 7816896, "index_size": 17695, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 76239, "raw_average_key_size": 23, "raw_value_size": 7753551, "raw_average_value_size": 2358, "num_data_blocks": 777, "num_entries": 3287, "num_filter_entries": 3287, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853441, "oldest_key_time": 1769853441, "file_creation_time": 1769853840, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:04:00 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 48661 microseconds, and 16145 cpu microseconds.
Jan 31 05:04:00 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:04:00.980939) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7843922 bytes OK
Jan 31 05:04:00 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:04:00.980960) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 31 05:04:00 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:04:00.982237) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 31 05:04:00 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:04:00.982253) EVENT_LOG_v1 {"time_micros": 1769853840982249, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 31 05:04:00 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:04:00.982282) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 31 05:04:00 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9830797, prev total WAL file size 9830797, number of live WAL files 2.
Jan 31 05:04:00 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:04:00 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:04:00.983721) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 31 05:04:00 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 31 05:04:00 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7660KB) 13(58KB) 8(1944B)]
Jan 31 05:04:00 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853840983819, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7905826, "oldest_snapshot_seqno": -1}
Jan 31 05:04:01 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3113 keys, 7858626 bytes, temperature: kUnknown
Jan 31 05:04:01 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853841039527, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7858626, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7832042, "index_size": 17706, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 74679, "raw_average_key_size": 23, "raw_value_size": 7770071, "raw_average_value_size": 2496, "num_data_blocks": 778, "num_entries": 3113, "num_filter_entries": 3113, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853439, "oldest_key_time": 0, "file_creation_time": 1769853840, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:04:01 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:04:01 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:04:01.039734) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7858626 bytes
Jan 31 05:04:01 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:04:01.041506) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.7 rd, 140.9 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.5, 0.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3402, records dropped: 289 output_compression: NoCompression
Jan 31 05:04:01 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:04:01.041522) EVENT_LOG_v1 {"time_micros": 1769853841041514, "job": 4, "event": "compaction_finished", "compaction_time_micros": 55783, "compaction_time_cpu_micros": 11514, "output_level": 6, "num_output_files": 1, "total_output_size": 7858626, "num_input_records": 3402, "num_output_records": 3113, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 05:04:01 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:04:01 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853841042166, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 31 05:04:01 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:04:01 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853841042207, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 31 05:04:01 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:04:01 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853841042227, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 31 05:04:01 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:04:00.983561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:04:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:04:01 np0005603787 python3.9[119505]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 05:04:01 np0005603787 systemd[1]: Starting Time & Date Service...
Jan 31 05:04:01 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 31 05:04:01 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 31 05:04:01 np0005603787 systemd[1]: Started Time & Date Service.
Jan 31 05:04:02 np0005603787 python3.9[119661]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:04:02 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 31 05:04:02 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 31 05:04:02 np0005603787 python3.9[119813]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:04:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:03 np0005603787 python3.9[119891]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:04:03 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 31 05:04:03 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 31 05:04:03 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 31 05:04:03 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 31 05:04:03 np0005603787 python3.9[120043]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:04:04 np0005603787 python3.9[120121]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.6y7hkibo recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:04:04 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 31 05:04:04 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 31 05:04:04 np0005603787 python3.9[120273]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:04:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:05 np0005603787 python3.9[120351]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:04:05 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 31 05:04:05 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 31 05:04:05 np0005603787 python3.9[120503]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:04:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:04:06 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 31 05:04:06 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 31 05:04:06 np0005603787 python3[120656]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 05:04:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:07 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 31 05:04:07 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 31 05:04:07 np0005603787 python3.9[120808]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:04:07 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 31 05:04:07 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 31 05:04:07 np0005603787 python3.9[120886]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:04:08 np0005603787 python3.9[121038]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:04:08 np0005603787 python3.9[121163]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853847.920157-308-228455748154391/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:04:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:09 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 31 05:04:09 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 31 05:04:09 np0005603787 python3.9[121315]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:04:09 np0005603787 python3.9[121393]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:04:10 np0005603787 python3.9[121545]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:04:10 np0005603787 python3.9[121623]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:04:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:04:11 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 31 05:04:11 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 31 05:04:11 np0005603787 python3.9[121775]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:04:12 np0005603787 python3.9[121853]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:04:12 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 31 05:04:12 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 31 05:04:12 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 31 05:04:12 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 31 05:04:12 np0005603787 python3.9[122005]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:04:13 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 31 05:04:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:13 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 31 05:04:13 np0005603787 python3.9[122160]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:04:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:04:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:04:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:04:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:04:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:04:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:04:14 np0005603787 python3.9[122312]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:04:14 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 31 05:04:14 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 31 05:04:14 np0005603787 python3.9[122464]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:04:14 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 31 05:04:14 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 31 05:04:15 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 31 05:04:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:15 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 31 05:04:15 np0005603787 python3.9[122616]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 05:04:15 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 31 05:04:15 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 31 05:04:15 np0005603787 python3.9[122768]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 05:04:16 np0005603787 systemd[1]: session-40.scope: Deactivated successfully.
Jan 31 05:04:16 np0005603787 systemd[1]: session-40.scope: Consumed 24.449s CPU time.
Jan 31 05:04:16 np0005603787 systemd-logind[786]: Session 40 logged out. Waiting for processes to exit.
Jan 31 05:04:16 np0005603787 systemd-logind[786]: Removed session 40.
Jan 31 05:04:16 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:04:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:18 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 31 05:04:18 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 31 05:04:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:19 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 31 05:04:19 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 31 05:04:20 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 31 05:04:20 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 31 05:04:20 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 31 05:04:20 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 31 05:04:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:04:21 np0005603787 systemd-logind[786]: New session 41 of user zuul.
Jan 31 05:04:21 np0005603787 systemd[1]: Started Session 41 of User zuul.
Jan 31 05:04:22 np0005603787 python3.9[122948]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 05:04:22 np0005603787 python3.9[123100]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:04:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:23 np0005603787 python3.9[123254]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 31 05:04:24 np0005603787 python3.9[123406]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.6urc6ux1 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:04:24 np0005603787 python3.9[123531]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.6urc6ux1 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853863.8187628-44-213414222781923/.source.6urc6ux1 _original_basename=.a1hg5uso follow=False checksum=ac1fbd5fc242e576b14be659bf8c36b623ad0f4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:04:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:25 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 31 05:04:25 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 31 05:04:25 np0005603787 python3.9[123683]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:04:26 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:04:26 np0005603787 python3.9[123835]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1vH7MPTZElmImL3pKNK6rcC7PaBiA/gXXchLJiHq8OhWrBXBDICCBaBd3JU+sJLMp0KfAlpfLJeEqGnLXoDdzfGnNa2s41mFsJIm5PFrKJziX/K2IUIaV+27aPCJSbe4yxAwAPuOrG0UKnLVQXeUE+idlMM/5sJ32u0KOgTFOJfm6gTtyTvjSChIsyea6pjh1Oas8NsEJWPnm7eTWMNUTVper1Mfq2di7Wxl7g2mnQF1f9lZXEpFLYSUOeW/LDcYrt+KmOzwdie7bBa6ut3XLu/GqmXCIdQJivf3YafIEey8HUCoap0CD/67J3TL4GNWYpSLyHZ8+tnyH1o1DUopQcEQq82YPETZbz7m1SZNdkTW7urc/T/YUYXB9OqoZTcdMQTxcBezQtLR6pLwlk79kXmSexhw9XZKt26D7SkxWO3XkJDehe+JOQ283gENR0Bi9xjRNSeLFeZczbM8LgeTOtjsYVWDaSCERMK30es99a43jOHJvgQc8KaYKo9iihc8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG5e/QGBKdCU0MiCMKtAS5faK6scEANXhee3MrXfRe5T#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGmdw8ziRFF+QShjWCTje17+56t1rJ+wJUoJrhdtL1Gsz/IovFuhm/YW1sC1ANbhgzpetMbHVKF09oEYGtwR+74=#012 create=True mode=0644 path=/tmp/ansible.6urc6ux1 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:04:26 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 31 05:04:26 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 31 05:04:27 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 31 05:04:27 np0005603787 ceph-osd[86934]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 31 05:04:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:27 np0005603787 python3.9[123987]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.6urc6ux1' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:04:27 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 31 05:04:27 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 31 05:04:28 np0005603787 python3.9[124141]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.6urc6ux1 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
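The tasks logged between 05:04:22 and 05:04:28 appear to rebuild /etc/ssh/ssh_known_hosts: stat and slurp the existing file, stage a copy at /tmp/ansible.6urc6ux1, insert the gathered host keys with blockinfile, cat the staging file over the real one, and finally delete the staging file. A minimal Python sketch of the same stage-then-install pattern, with placeholder host names and key material rather than the real keys above:

    # Sketch only: rebuild a known_hosts file from gathered host keys,
    # staging in a temp file and installing it, mirroring the
    # stat/blockinfile/cat sequence in the log. Keys are placeholders.
    import os, tempfile

    host_keys = {
        "compute-0.ctlplane.example.com,192.168.122.100,compute-0*": [
            "ssh-rsa AAAA...",                # placeholder key material
            "ssh-ed25519 AAAA...",
            "ecdsa-sha2-nistp256 AAAA...",
        ],
    }

    def write_known_hosts(path="/etc/ssh/ssh_known_hosts"):
        fd, tmp = tempfile.mkstemp(prefix="ansible.", dir="/tmp")
        with os.fdopen(fd, "w") as f:
            f.write("# BEGIN ANSIBLE MANAGED BLOCK\n")
            for names, keys in host_keys.items():
                for key in keys:
                    f.write(f"{names} {key}\n")
            f.write("# END ANSIBLE MANAGED BLOCK\n")
        os.chmod(tmp, 0o644)
        # os.replace is atomic, unlike the `cat ... > file` redirect used above
        os.replace(tmp, path)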
Jan 31 05:04:28 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 31 05:04:28 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 31 05:04:28 np0005603787 systemd[1]: session-41.scope: Deactivated successfully.
Jan 31 05:04:28 np0005603787 systemd[1]: session-41.scope: Consumed 4.002s CPU time.
Jan 31 05:04:28 np0005603787 systemd-logind[786]: Session 41 logged out. Waiting for processes to exit.
Jan 31 05:04:28 np0005603787 systemd-logind[786]: Removed session 41.
Jan 31 05:04:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:29 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 31 05:04:29 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 31 05:04:30 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 31 05:04:30 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 31 05:04:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:31 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:04:31 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 31 05:04:31 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 31 05:04:31 np0005603787 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 05:04:31 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 31 05:04:31 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 31 05:04:32 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 31 05:04:32 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 31 05:04:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:33 np0005603787 systemd-logind[786]: New session 42 of user zuul.
Jan 31 05:04:33 np0005603787 systemd[1]: Started Session 42 of User zuul.
Jan 31 05:04:34 np0005603787 python3.9[124321]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:04:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:35 np0005603787 python3.9[124477]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 05:04:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:04:36 np0005603787 python3.9[124631]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
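The two ansible.builtin.systemd tasks above enable sshd and make sure it is running; a hedged equivalent in plain systemctl calls:

    # Sketch of what the two systemd tasks amount to on the host.
    import subprocess

    for args in (["enable", "sshd"], ["start", "sshd"]):
        subprocess.run(["systemctl", *args], check=True)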
Jan 31 05:04:36 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 31 05:04:36 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 31 05:04:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:37 np0005603787 python3.9[124784]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
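The command task above loads the EDPM chain definitions with `nft -f`, which applies the whole file in one transaction. A small sketch that also syntax-checks the file first with `nft -c -f`; the check step is an addition for illustration, not something the logged task does:

    # Sketch: validate and then apply an nftables ruleset file.
    import subprocess

    def apply_ruleset(path="/etc/nftables/edpm-chains.nft"):
        subprocess.run(["nft", "-c", "-f", path], check=True)  # dry-run syntax check
        subprocess.run(["nft", "-f", path], check=True)        # apply as one transaction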
Jan 31 05:04:37 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 31 05:04:37 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 31 05:04:37 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 31 05:04:37 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 31 05:04:38 np0005603787 python3.9[124937]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:04:38 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 31 05:04:38 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 31 05:04:38 np0005603787 python3.9[125089]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:04:39 np0005603787 systemd[1]: session-42.scope: Deactivated successfully.
Jan 31 05:04:39 np0005603787 systemd[1]: session-42.scope: Consumed 3.368s CPU time.
Jan 31 05:04:39 np0005603787 systemd-logind[786]: Session 42 logged out. Waiting for processes to exit.
Jan 31 05:04:39 np0005603787 systemd-logind[786]: Removed session 42.
Jan 31 05:04:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:40 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 31 05:04:40 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 31 05:04:40 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 31 05:04:40 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 31 05:04:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:04:42 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 31 05:04:42 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:04:43
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'images', 'volumes', 'vms', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'backups']
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
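The balancer lines above show a run in upmap mode with a misplaced-PG budget of 5%, and "prepared 0/10" means none of the up to 10 upmap changes allowed this round were needed, i.e. the 305 PGs are already balanced. A quick arithmetic check of that budget:

    pg_total = 305          # from the pgmap lines in this log
    max_misplaced = 0.05    # "max misplaced 0.050000"
    print(int(pg_total * max_misplaced))   # 15 PGs may be misplaced at any one time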
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:43 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 31 05:04:43 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 31 05:04:43 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 31 05:04:43 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:04:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:04:44 np0005603787 systemd-logind[786]: New session 43 of user zuul.
Jan 31 05:04:44 np0005603787 systemd[1]: Started Session 43 of User zuul.
Jan 31 05:04:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:45 np0005603787 python3.9[125267]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:04:46 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 31 05:04:46 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 31 05:04:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:04:46 np0005603787 python3.9[125423]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:04:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:47 np0005603787 python3.9[125507]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 05:04:47 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 31 05:04:47 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 31 05:04:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:49 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 31 05:04:49 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 31 05:04:49 np0005603787 python3.9[125658]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:04:49 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 31 05:04:49 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 31 05:04:50 np0005603787 python3.9[125809]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
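The two tasks above check whether a reboot is pending: needs-restarting -r exits 1 when a reboot is advised and 0 otherwise, and the find task looks for flag files under /var/lib/openstack/reboot_required/. A hedged sketch that combines both signals; only the two checks themselves are taken from the log, how the playbook combines them is an assumption:

    import pathlib, subprocess

    def reboot_required(flag_dir="/var/lib/openstack/reboot_required/"):
        # exit code 1 from `needs-restarting -r` means a reboot is advised
        rc = subprocess.run(["needs-restarting", "-r"],
                            stdout=subprocess.DEVNULL).returncode
        d = pathlib.Path(flag_dir)
        flags = [p for p in d.iterdir() if p.is_file()] if d.is_dir() else []
        return rc == 1 or bool(flags)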
Jan 31 05:04:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:51 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 31 05:04:51 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 31 05:04:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:04:51 np0005603787 python3.9[125959]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:04:52 np0005603787 python3.9[126109]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:04:52 np0005603787 systemd[1]: session-43.scope: Deactivated successfully.
Jan 31 05:04:52 np0005603787 systemd[1]: session-43.scope: Consumed 5.177s CPU time.
Jan 31 05:04:52 np0005603787 systemd-logind[786]: Session 43 logged out. Waiting for processes to exit.
Jan 31 05:04:52 np0005603787 systemd-logind[786]: Removed session 43.
Jan 31 05:04:52 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 31 05:04:52 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 31 05:04:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1786947556520692e-06 of space, bias 4.0, pg target 0.0014144337067824831 quantized to 16 (current 16)
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:04:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
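The pg_autoscaler figures above are self-consistent: each pool's pg target is its capacity ratio times its bias times a per-cluster PG budget, then quantized to the pg_num actually in use. Assuming the usual target of 100 PGs per OSD and the 3 OSDs listed later in this log, the budget is 300, which reproduces the '.mgr' line:

    capacity_ratio = 7.185749983720779e-06   # ".mgr" pool, from the log above
    bias = 1.0
    pg_budget = 100 * 3                      # assumed: 100 PGs per OSD, 3 OSDs
    print(capacity_ratio * bias * pg_budget) # ~0.002155725, the logged pg target, quantized to 1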
Jan 31 05:04:54 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 31 05:04:54 np0005603787 ceph-osd[87996]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 31 05:04:54 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:04:54 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:04:54 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:04:54 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:04:54 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:04:54 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:04:54 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:04:54 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:04:54 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:04:54 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:04:54 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:04:54 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
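The mon commands dispatched above (config generate-minimal-conf, auth get for client.admin and client.bootstrap-osd, an osd tree of destroyed OSDs, and the osd_remove_queue config-key) look like the cephadm mgr module refreshing this host. The minimal conf it distributes can be reproduced by hand; a hedged sketch:

    # Sketch: print the same minimal ceph.conf the mgr generates above.
    import subprocess
    print(subprocess.run(["ceph", "config", "generate-minimal-conf"],
                         capture_output=True, text=True).stdout)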
Jan 31 05:04:54 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 31 05:04:54 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 31 05:04:55 np0005603787 podman[126279]: 2026-01-31 10:04:55.151556429 +0000 UTC m=+0.042396897 container create 6859e1eac35e8848064ba403c2a284a0141baa6ba1d63171053b46f7e37070bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kepler, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 05:04:55 np0005603787 systemd[1]: Started libpod-conmon-6859e1eac35e8848064ba403c2a284a0141baa6ba1d63171053b46f7e37070bb.scope.
Jan 31 05:04:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:55 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:04:55 np0005603787 podman[126279]: 2026-01-31 10:04:55.212149828 +0000 UTC m=+0.102990346 container init 6859e1eac35e8848064ba403c2a284a0141baa6ba1d63171053b46f7e37070bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:04:55 np0005603787 podman[126279]: 2026-01-31 10:04:55.218976369 +0000 UTC m=+0.109816867 container start 6859e1eac35e8848064ba403c2a284a0141baa6ba1d63171053b46f7e37070bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:04:55 np0005603787 podman[126279]: 2026-01-31 10:04:55.222695588 +0000 UTC m=+0.113536086 container attach 6859e1eac35e8848064ba403c2a284a0141baa6ba1d63171053b46f7e37070bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:04:55 np0005603787 wonderful_kepler[126296]: 167 167
Jan 31 05:04:55 np0005603787 systemd[1]: libpod-6859e1eac35e8848064ba403c2a284a0141baa6ba1d63171053b46f7e37070bb.scope: Deactivated successfully.
Jan 31 05:04:55 np0005603787 podman[126279]: 2026-01-31 10:04:55.22767931 +0000 UTC m=+0.118519788 container died 6859e1eac35e8848064ba403c2a284a0141baa6ba1d63171053b46f7e37070bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 05:04:55 np0005603787 podman[126279]: 2026-01-31 10:04:55.13730859 +0000 UTC m=+0.028149088 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:04:55 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f415f432e7938c1dad6aae84f8ec7af847a7f200f471ffc9234709d46e5559f6-merged.mount: Deactivated successfully.
Jan 31 05:04:55 np0005603787 podman[126279]: 2026-01-31 10:04:55.266794999 +0000 UTC m=+0.157635467 container remove 6859e1eac35e8848064ba403c2a284a0141baa6ba1d63171053b46f7e37070bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kepler, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 31 05:04:55 np0005603787 systemd[1]: libpod-conmon-6859e1eac35e8848064ba403c2a284a0141baa6ba1d63171053b46f7e37070bb.scope: Deactivated successfully.
Jan 31 05:04:55 np0005603787 podman[126321]: 2026-01-31 10:04:55.443364538 +0000 UTC m=+0.050085250 container create 2b9bb61f3278017b22ecf821163a50c5b6bc2e941fbf56b34b5977e29b3f61f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 05:04:55 np0005603787 systemd[1]: Started libpod-conmon-2b9bb61f3278017b22ecf821163a50c5b6bc2e941fbf56b34b5977e29b3f61f7.scope.
Jan 31 05:04:55 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:04:55 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:04:55 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:04:55 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:04:55 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93dce4913ce0927c34d6f1c080a60b46a42b762aa14f17e16301eb131636098e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:04:55 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93dce4913ce0927c34d6f1c080a60b46a42b762aa14f17e16301eb131636098e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:04:55 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93dce4913ce0927c34d6f1c080a60b46a42b762aa14f17e16301eb131636098e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:04:55 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93dce4913ce0927c34d6f1c080a60b46a42b762aa14f17e16301eb131636098e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:04:55 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93dce4913ce0927c34d6f1c080a60b46a42b762aa14f17e16301eb131636098e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:04:55 np0005603787 podman[126321]: 2026-01-31 10:04:55.428046742 +0000 UTC m=+0.034767434 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:04:55 np0005603787 podman[126321]: 2026-01-31 10:04:55.544395592 +0000 UTC m=+0.151116384 container init 2b9bb61f3278017b22ecf821163a50c5b6bc2e941fbf56b34b5977e29b3f61f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curran, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 05:04:55 np0005603787 podman[126321]: 2026-01-31 10:04:55.551031628 +0000 UTC m=+0.157752370 container start 2b9bb61f3278017b22ecf821163a50c5b6bc2e941fbf56b34b5977e29b3f61f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curran, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:04:55 np0005603787 podman[126321]: 2026-01-31 10:04:55.555305351 +0000 UTC m=+0.162026113 container attach 2b9bb61f3278017b22ecf821163a50c5b6bc2e941fbf56b34b5977e29b3f61f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curran, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:04:55 np0005603787 wizardly_curran[126338]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:04:55 np0005603787 wizardly_curran[126338]: --> All data devices are unavailable
Jan 31 05:04:55 np0005603787 systemd[1]: libpod-2b9bb61f3278017b22ecf821163a50c5b6bc2e941fbf56b34b5977e29b3f61f7.scope: Deactivated successfully.
Jan 31 05:04:55 np0005603787 podman[126321]: 2026-01-31 10:04:55.991833635 +0000 UTC m=+0.598554337 container died 2b9bb61f3278017b22ecf821163a50c5b6bc2e941fbf56b34b5977e29b3f61f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curran, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 05:04:56 np0005603787 systemd[1]: var-lib-containers-storage-overlay-93dce4913ce0927c34d6f1c080a60b46a42b762aa14f17e16301eb131636098e-merged.mount: Deactivated successfully.
Jan 31 05:04:56 np0005603787 podman[126321]: 2026-01-31 10:04:56.032778122 +0000 UTC m=+0.639498824 container remove 2b9bb61f3278017b22ecf821163a50c5b6bc2e941fbf56b34b5977e29b3f61f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 05:04:56 np0005603787 systemd[1]: libpod-conmon-2b9bb61f3278017b22ecf821163a50c5b6bc2e941fbf56b34b5977e29b3f61f7.scope: Deactivated successfully.
Jan 31 05:04:56 np0005603787 podman[126433]: 2026-01-31 10:04:56.400810337 +0000 UTC m=+0.044736699 container create fe7ce1ba75a786b929e558bd04d664483ed299aa25d267033c27d7e057a75091 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bassi, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 05:04:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:04:56 np0005603787 systemd[1]: Started libpod-conmon-fe7ce1ba75a786b929e558bd04d664483ed299aa25d267033c27d7e057a75091.scope.
Jan 31 05:04:56 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:04:56 np0005603787 podman[126433]: 2026-01-31 10:04:56.469997255 +0000 UTC m=+0.113923597 container init fe7ce1ba75a786b929e558bd04d664483ed299aa25d267033c27d7e057a75091 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bassi, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:04:56 np0005603787 podman[126433]: 2026-01-31 10:04:56.378191427 +0000 UTC m=+0.022117789 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:04:56 np0005603787 podman[126433]: 2026-01-31 10:04:56.478424219 +0000 UTC m=+0.122350541 container start fe7ce1ba75a786b929e558bd04d664483ed299aa25d267033c27d7e057a75091 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 05:04:56 np0005603787 optimistic_bassi[126450]: 167 167
Jan 31 05:04:56 np0005603787 systemd[1]: libpod-fe7ce1ba75a786b929e558bd04d664483ed299aa25d267033c27d7e057a75091.scope: Deactivated successfully.
Jan 31 05:04:56 np0005603787 podman[126433]: 2026-01-31 10:04:56.481581403 +0000 UTC m=+0.125507775 container attach fe7ce1ba75a786b929e558bd04d664483ed299aa25d267033c27d7e057a75091 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:04:56 np0005603787 podman[126433]: 2026-01-31 10:04:56.482392414 +0000 UTC m=+0.126318796 container died fe7ce1ba75a786b929e558bd04d664483ed299aa25d267033c27d7e057a75091 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 05:04:56 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f63ba8bddcc62a7cf9aad0a7b74a1f79d96a3171f934b53e291606b45c2ecc14-merged.mount: Deactivated successfully.
Jan 31 05:04:56 np0005603787 podman[126433]: 2026-01-31 10:04:56.529610518 +0000 UTC m=+0.173536850 container remove fe7ce1ba75a786b929e558bd04d664483ed299aa25d267033c27d7e057a75091 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 05:04:56 np0005603787 systemd[1]: libpod-conmon-fe7ce1ba75a786b929e558bd04d664483ed299aa25d267033c27d7e057a75091.scope: Deactivated successfully.
Jan 31 05:04:56 np0005603787 podman[126476]: 2026-01-31 10:04:56.664433748 +0000 UTC m=+0.040082605 container create 5005385fd5c82a205734c1db70a61b50999c6bdc970c837aeda5bc85ffcb7a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lamarr, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:04:56 np0005603787 systemd[1]: Started libpod-conmon-5005385fd5c82a205734c1db70a61b50999c6bdc970c837aeda5bc85ffcb7a77.scope.
Jan 31 05:04:56 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:04:56 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e4f101aa2bb2aa20f30fe02ffd18a190d227ad3a7291f6b9e313410a972603c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:04:56 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e4f101aa2bb2aa20f30fe02ffd18a190d227ad3a7291f6b9e313410a972603c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:04:56 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e4f101aa2bb2aa20f30fe02ffd18a190d227ad3a7291f6b9e313410a972603c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:04:56 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e4f101aa2bb2aa20f30fe02ffd18a190d227ad3a7291f6b9e313410a972603c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:04:56 np0005603787 podman[126476]: 2026-01-31 10:04:56.648763482 +0000 UTC m=+0.024412369 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:04:56 np0005603787 podman[126476]: 2026-01-31 10:04:56.756380561 +0000 UTC m=+0.132029458 container init 5005385fd5c82a205734c1db70a61b50999c6bdc970c837aeda5bc85ffcb7a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 05:04:56 np0005603787 podman[126476]: 2026-01-31 10:04:56.762202305 +0000 UTC m=+0.137851212 container start 5005385fd5c82a205734c1db70a61b50999c6bdc970c837aeda5bc85ffcb7a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 05:04:56 np0005603787 podman[126476]: 2026-01-31 10:04:56.766699215 +0000 UTC m=+0.142348112 container attach 5005385fd5c82a205734c1db70a61b50999c6bdc970c837aeda5bc85ffcb7a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lamarr, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]: {
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:    "0": [
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:        {
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "devices": [
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "/dev/loop3"
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            ],
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "lv_name": "ceph_lv0",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "lv_size": "21470642176",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "name": "ceph_lv0",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "tags": {
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.cluster_name": "ceph",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.crush_device_class": "",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.encrypted": "0",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.objectstore": "bluestore",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.osd_id": "0",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.type": "block",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.vdo": "0",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.with_tpm": "0"
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            },
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "type": "block",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "vg_name": "ceph_vg0"
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:        }
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:    ],
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:    "1": [
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:        {
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "devices": [
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "/dev/loop4"
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            ],
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "lv_name": "ceph_lv1",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "lv_size": "21470642176",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "name": "ceph_lv1",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "tags": {
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.cluster_name": "ceph",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.crush_device_class": "",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.encrypted": "0",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.objectstore": "bluestore",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.osd_id": "1",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.type": "block",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.vdo": "0",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.with_tpm": "0"
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            },
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "type": "block",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "vg_name": "ceph_vg1"
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:        }
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:    ],
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:    "2": [
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:        {
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "devices": [
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "/dev/loop5"
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            ],
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "lv_name": "ceph_lv2",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "lv_size": "21470642176",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "name": "ceph_lv2",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "tags": {
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.cluster_name": "ceph",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.crush_device_class": "",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.encrypted": "0",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.objectstore": "bluestore",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.osd_id": "2",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.type": "block",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.vdo": "0",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:                "ceph.with_tpm": "0"
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            },
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "type": "block",
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:            "vg_name": "ceph_vg2"
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:        }
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]:    ]
Jan 31 05:04:57 np0005603787 gifted_lamarr[126493]: }
Jan 31 05:04:57 np0005603787 systemd[1]: libpod-5005385fd5c82a205734c1db70a61b50999c6bdc970c837aeda5bc85ffcb7a77.scope: Deactivated successfully.
Jan 31 05:04:57 np0005603787 podman[126476]: 2026-01-31 10:04:57.083478369 +0000 UTC m=+0.459127226 container died 5005385fd5c82a205734c1db70a61b50999c6bdc970c837aeda5bc85ffcb7a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lamarr, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:04:57 np0005603787 systemd[1]: var-lib-containers-storage-overlay-1e4f101aa2bb2aa20f30fe02ffd18a190d227ad3a7291f6b9e313410a972603c-merged.mount: Deactivated successfully.
Jan 31 05:04:57 np0005603787 podman[126476]: 2026-01-31 10:04:57.120770329 +0000 UTC m=+0.496419186 container remove 5005385fd5c82a205734c1db70a61b50999c6bdc970c837aeda5bc85ffcb7a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lamarr, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 05:04:57 np0005603787 systemd[1]: libpod-conmon-5005385fd5c82a205734c1db70a61b50999c6bdc970c837aeda5bc85ffcb7a77.scope: Deactivated successfully.
Jan 31 05:04:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:04:57 np0005603787 podman[126576]: 2026-01-31 10:04:57.519813377 +0000 UTC m=+0.037357152 container create 7adf2629c6f0b7fcce19f9498039c68c92689598341c8988ffc747dc1f2bccc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:04:57 np0005603787 systemd[1]: Started libpod-conmon-7adf2629c6f0b7fcce19f9498039c68c92689598341c8988ffc747dc1f2bccc9.scope.
Jan 31 05:04:57 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:04:57 np0005603787 podman[126576]: 2026-01-31 10:04:57.586577641 +0000 UTC m=+0.104121386 container init 7adf2629c6f0b7fcce19f9498039c68c92689598341c8988ffc747dc1f2bccc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:04:57 np0005603787 podman[126576]: 2026-01-31 10:04:57.596143745 +0000 UTC m=+0.113687480 container start 7adf2629c6f0b7fcce19f9498039c68c92689598341c8988ffc747dc1f2bccc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_dijkstra, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 05:04:57 np0005603787 podman[126576]: 2026-01-31 10:04:57.500171475 +0000 UTC m=+0.017715240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:04:57 np0005603787 musing_dijkstra[126592]: 167 167
Jan 31 05:04:57 np0005603787 systemd[1]: libpod-7adf2629c6f0b7fcce19f9498039c68c92689598341c8988ffc747dc1f2bccc9.scope: Deactivated successfully.
Jan 31 05:04:57 np0005603787 conmon[126592]: conmon 7adf2629c6f0b7fcce19 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7adf2629c6f0b7fcce19f9498039c68c92689598341c8988ffc747dc1f2bccc9.scope/container/memory.events
Jan 31 05:04:57 np0005603787 podman[126576]: 2026-01-31 10:04:57.600604043 +0000 UTC m=+0.118147778 container attach 7adf2629c6f0b7fcce19f9498039c68c92689598341c8988ffc747dc1f2bccc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:04:57 np0005603787 podman[126576]: 2026-01-31 10:04:57.601305442 +0000 UTC m=+0.118849177 container died 7adf2629c6f0b7fcce19f9498039c68c92689598341c8988ffc747dc1f2bccc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 05:04:57 np0005603787 systemd[1]: var-lib-containers-storage-overlay-8ba389cc778f2a9a4604d2a73120545cb065ff2ebd52d909067fd6b7fd07029e-merged.mount: Deactivated successfully.
Jan 31 05:04:57 np0005603787 podman[126576]: 2026-01-31 10:04:57.63324384 +0000 UTC m=+0.150787575 container remove 7adf2629c6f0b7fcce19f9498039c68c92689598341c8988ffc747dc1f2bccc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:04:57 np0005603787 systemd[1]: libpod-conmon-7adf2629c6f0b7fcce19f9498039c68c92689598341c8988ffc747dc1f2bccc9.scope: Deactivated successfully.
Jan 31 05:04:57 np0005603787 podman[126615]: 2026-01-31 10:04:57.73339007 +0000 UTC m=+0.034952679 container create f6761399680a76fc7097aff1ca2a8bcf07588a5e978692b52a083c55e4cf43d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 05:04:57 np0005603787 systemd[1]: Started libpod-conmon-f6761399680a76fc7097aff1ca2a8bcf07588a5e978692b52a083c55e4cf43d1.scope.
Jan 31 05:04:57 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:04:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108b29030ce611cc751844d838f2d8f764385dc6a6f312b26d57f7dc1f2dde81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:04:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108b29030ce611cc751844d838f2d8f764385dc6a6f312b26d57f7dc1f2dde81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:04:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108b29030ce611cc751844d838f2d8f764385dc6a6f312b26d57f7dc1f2dde81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:04:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108b29030ce611cc751844d838f2d8f764385dc6a6f312b26d57f7dc1f2dde81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:04:57 np0005603787 podman[126615]: 2026-01-31 10:04:57.716637225 +0000 UTC m=+0.018199854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:04:57 np0005603787 podman[126615]: 2026-01-31 10:04:57.823661487 +0000 UTC m=+0.125224116 container init f6761399680a76fc7097aff1ca2a8bcf07588a5e978692b52a083c55e4cf43d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_franklin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 05:04:57 np0005603787 podman[126615]: 2026-01-31 10:04:57.831887575 +0000 UTC m=+0.133450174 container start f6761399680a76fc7097aff1ca2a8bcf07588a5e978692b52a083c55e4cf43d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_franklin, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 05:04:57 np0005603787 podman[126615]: 2026-01-31 10:04:57.835368739 +0000 UTC m=+0.136931348 container attach f6761399680a76fc7097aff1ca2a8bcf07588a5e978692b52a083c55e4cf43d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_franklin, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 05:04:57 np0005603787 systemd-logind[786]: New session 44 of user zuul.
Jan 31 05:04:57 np0005603787 systemd[1]: Started Session 44 of User zuul.
Jan 31 05:04:58 np0005603787 lvm[126767]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:04:58 np0005603787 lvm[126767]: VG ceph_vg0 finished
Jan 31 05:04:58 np0005603787 lvm[126775]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:04:58 np0005603787 lvm[126775]: VG ceph_vg1 finished
Jan 31 05:04:58 np0005603787 lvm[126793]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:04:58 np0005603787 lvm[126793]: VG ceph_vg2 finished
Jan 31 05:04:58 np0005603787 sleepy_franklin[126634]: {}
Jan 31 05:04:58 np0005603787 systemd[1]: libpod-f6761399680a76fc7097aff1ca2a8bcf07588a5e978692b52a083c55e4cf43d1.scope: Deactivated successfully.
Jan 31 05:04:58 np0005603787 podman[126615]: 2026-01-31 10:04:58.529613638 +0000 UTC m=+0.831176277 container died f6761399680a76fc7097aff1ca2a8bcf07588a5e978692b52a083c55e4cf43d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_franklin, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:04:58 np0005603787 systemd[1]: var-lib-containers-storage-overlay-108b29030ce611cc751844d838f2d8f764385dc6a6f312b26d57f7dc1f2dde81-merged.mount: Deactivated successfully.
Jan 31 05:04:58 np0005603787 podman[126615]: 2026-01-31 10:04:58.577944311 +0000 UTC m=+0.879506920 container remove f6761399680a76fc7097aff1ca2a8bcf07588a5e978692b52a083c55e4cf43d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:04:58 np0005603787 systemd[1]: libpod-conmon-f6761399680a76fc7097aff1ca2a8bcf07588a5e978692b52a083c55e4cf43d1.scope: Deactivated successfully.
Jan 31 05:04:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:04:58 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:04:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:04:58 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:04:58 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:04:58 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:04:58 np0005603787 python3.9[126880]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:04:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:00 np0005603787 python3.9[127061]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:05:00 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 31 05:05:00 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 31 05:05:00 np0005603787 python3.9[127213]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:05:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:05:01 np0005603787 python3.9[127365]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:01 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 31 05:05:01 np0005603787 ceph-osd[85879]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 31 05:05:02 np0005603787 python3.9[127488]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853901.1068437-60-183044469281451/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=9c4543a41edd17580da7295cee9548f13bcd3c81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:02 np0005603787 python3.9[127640]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:03 np0005603787 python3.9[127763]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853902.4096866-60-103331737290453/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7ae25ebcc71c7c214bc10c0600bd5c3336cd3f83 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:03 np0005603787 python3.9[127915]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:04 np0005603787 python3.9[128038]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853903.3667557-60-234585377709857/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=c9b830e9b282749bb872a9d189d3945ff24d10c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:04 np0005603787 python3.9[128190]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:05:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:05 np0005603787 python3.9[128342]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:05:06 np0005603787 python3.9[128494]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:05:07 np0005603787 python3.9[128617]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853905.7612095-119-174992095721072/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=e5e91ce4210c0e190adef95cfb36d626008acfec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:07 np0005603787 python3.9[128769]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:08 np0005603787 python3.9[128892]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853907.217023-119-247017600777383/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=11b913189e57a4c0c9511586ff6e4247feaeb462 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:08 np0005603787 python3.9[129044]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:09 np0005603787 python3.9[129167]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853908.1332881-119-94175207915198/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a2c441c03b7b4103d7727a2f6f021452cefe58bf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:09 np0005603787 python3.9[129319]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:05:10 np0005603787 python3.9[129471]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:05:10 np0005603787 python3.9[129623]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:11 np0005603787 python3.9[129746]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853910.5275915-178-177247293578378/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4dbf72ce2b42d84af0f9dade3b7fce5964ffd4a4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:05:11 np0005603787 python3.9[129898]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:12 np0005603787 python3.9[130021]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853911.5991528-178-58333102193047/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=11b913189e57a4c0c9511586ff6e4247feaeb462 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:13 np0005603787 python3.9[130173]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:13 np0005603787 python3.9[130296]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853912.587732-178-254673381303710/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=68191a97e9480619cb9dc5100c06a959a579b376 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:05:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:05:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:05:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:05:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:05:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:05:14 np0005603787 python3.9[130448]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:05:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:15 np0005603787 python3.9[130600]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:15 np0005603787 systemd[1]: session-18.scope: Deactivated successfully.
Jan 31 05:05:15 np0005603787 systemd[1]: session-18.scope: Consumed 1min 24.074s CPU time.
Jan 31 05:05:15 np0005603787 systemd-logind[786]: Session 18 logged out. Waiting for processes to exit.
Jan 31 05:05:15 np0005603787 systemd-logind[786]: Removed session 18.
Jan 31 05:05:15 np0005603787 python3.9[130723]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853914.909945-246-37695410500730/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab7d5e470c6e190b74372f300d98064504b36836 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:16 np0005603787 python3.9[130876]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:05:16 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:05:17 np0005603787 python3.9[131028]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:17 np0005603787 python3.9[131151]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853916.6155026-270-99198483651782/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab7d5e470c6e190b74372f300d98064504b36836 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:18 np0005603787 python3.9[131303]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:05:18 np0005603787 python3.9[131455]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:19 np0005603787 python3.9[131578]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853918.5092208-294-92907907332826/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab7d5e470c6e190b74372f300d98064504b36836 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:20 np0005603787 python3.9[131730]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:05:20 np0005603787 python3.9[131882]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:21 np0005603787 python3.9[132005]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853920.2100787-318-104548665253466/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab7d5e470c6e190b74372f300d98064504b36836 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:21 np0005603787 python3.9[132157]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:05:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:05:22 np0005603787 python3.9[132309]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:22 np0005603787 python3.9[132432]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853922.0173626-342-260320963607276/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab7d5e470c6e190b74372f300d98064504b36836 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:23 np0005603787 python3.9[132584]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:05:24 np0005603787 python3.9[132736]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:24 np0005603787 python3.9[132859]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853923.7977195-366-260019949850378/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab7d5e470c6e190b74372f300d98064504b36836 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:25 np0005603787 systemd[1]: session-44.scope: Deactivated successfully.
Jan 31 05:05:25 np0005603787 systemd[1]: session-44.scope: Consumed 18.977s CPU time.
Jan 31 05:05:25 np0005603787 systemd-logind[786]: Session 44 logged out. Waiting for processes to exit.
Jan 31 05:05:25 np0005603787 systemd-logind[786]: Removed session 44.
Jan 31 05:05:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:26 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:05:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:30 np0005603787 systemd-logind[786]: New session 45 of user zuul.
Jan 31 05:05:30 np0005603787 systemd[1]: Started Session 45 of User zuul.
Jan 31 05:05:30 np0005603787 python3.9[133039]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:31 np0005603787 python3.9[133191]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:31 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:05:32 np0005603787 python3.9[133314]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853931.056632-29-151397019862153/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=6238d334966e291f00ca4a59110821f30ba4f9b5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:32 np0005603787 python3.9[133466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:33 np0005603787 python3.9[133589]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853932.3679-29-43807929951324/.source.conf _original_basename=ceph.conf follow=False checksum=f4674c101d02560746db955b750ba272031f0f34 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:33 np0005603787 systemd[1]: session-45.scope: Deactivated successfully.
Jan 31 05:05:33 np0005603787 systemd[1]: session-45.scope: Consumed 2.333s CPU time.
Jan 31 05:05:33 np0005603787 systemd-logind[786]: Session 45 logged out. Waiting for processes to exit.
Jan 31 05:05:33 np0005603787 systemd-logind[786]: Removed session 45.
Jan 31 05:05:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:05:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:39 np0005603787 systemd-logind[786]: New session 46 of user zuul.
Jan 31 05:05:39 np0005603787 systemd[1]: Started Session 46 of User zuul.
Jan 31 05:05:40 np0005603787 python3.9[133767]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:05:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:41 np0005603787 python3.9[133923]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:05:41 np0005603787 python3.9[134075]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:05:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:05:42 np0005603787 python3.9[134225]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:05:43
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'vms']
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:43 np0005603787 python3.9[134377]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:05:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:05:44 np0005603787 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 31 05:05:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:45 np0005603787 python3.9[134533]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:05:46 np0005603787 python3.9[134617]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:05:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:05:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:48 np0005603787 python3.9[134770]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 05:05:49 np0005603787 python3[134925]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 31 05:05:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:49 np0005603787 python3.9[135077]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:50 np0005603787 python3.9[135229]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:51 np0005603787 python3.9[135307]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:51 np0005603787 python3.9[135459]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:05:52 np0005603787 python3.9[135537]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.86hrkf6k recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:52 np0005603787 python3.9[135689]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:53 np0005603787 python3.9[135767]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:53 np0005603787 python3.9[135919]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1786947556520692e-06 of space, bias 4.0, pg target 0.0014144337067824831 quantized to 16 (current 16)
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:05:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:05:54 np0005603787 python3[136072]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 05:05:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:55 np0005603787 python3.9[136224]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:55 np0005603787 python3.9[136349]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853954.811624-152-201403877411595/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:56 np0005603787 python3.9[136501]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:05:57 np0005603787 python3.9[136626]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853956.0421226-167-59479808397104/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:57 np0005603787 python3.9[136778]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:58 np0005603787 python3.9[136903]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853957.1921778-182-180152849945015/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:58 np0005603787 python3.9[137055]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:59 np0005603787 python3.9[137230]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853958.2453887-197-270141757934649/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:05:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:05:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:05:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:05:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:05:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:05:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:05:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:05:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:05:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:05:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:05:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:05:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:05:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:05:59 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:05:59 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:05:59 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:05:59 np0005603787 podman[137447]: 2026-01-31 10:05:59.584198882 +0000 UTC m=+0.037943671 container create 18565f2eda17246f6c32397d03fc5100664223d6715f40b502a59e86e21112f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_williams, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:05:59 np0005603787 systemd[1]: Started libpod-conmon-18565f2eda17246f6c32397d03fc5100664223d6715f40b502a59e86e21112f1.scope.
Jan 31 05:05:59 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:05:59 np0005603787 podman[137447]: 2026-01-31 10:05:59.563724597 +0000 UTC m=+0.017469406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:05:59 np0005603787 podman[137447]: 2026-01-31 10:05:59.669067287 +0000 UTC m=+0.122812096 container init 18565f2eda17246f6c32397d03fc5100664223d6715f40b502a59e86e21112f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:05:59 np0005603787 podman[137447]: 2026-01-31 10:05:59.676470733 +0000 UTC m=+0.130215522 container start 18565f2eda17246f6c32397d03fc5100664223d6715f40b502a59e86e21112f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_williams, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:05:59 np0005603787 podman[137447]: 2026-01-31 10:05:59.679533495 +0000 UTC m=+0.133278284 container attach 18565f2eda17246f6c32397d03fc5100664223d6715f40b502a59e86e21112f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_williams, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 05:05:59 np0005603787 focused_williams[137491]: 167 167
Jan 31 05:05:59 np0005603787 systemd[1]: libpod-18565f2eda17246f6c32397d03fc5100664223d6715f40b502a59e86e21112f1.scope: Deactivated successfully.
Jan 31 05:05:59 np0005603787 podman[137447]: 2026-01-31 10:05:59.681734423 +0000 UTC m=+0.135479252 container died 18565f2eda17246f6c32397d03fc5100664223d6715f40b502a59e86e21112f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:05:59 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ddaf365fc78bc89a1dde78466adc37959eeb80a7d5f5bdee2dbbb695800160c2-merged.mount: Deactivated successfully.
Jan 31 05:05:59 np0005603787 podman[137447]: 2026-01-31 10:05:59.730348485 +0000 UTC m=+0.184093284 container remove 18565f2eda17246f6c32397d03fc5100664223d6715f40b502a59e86e21112f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:05:59 np0005603787 systemd[1]: libpod-conmon-18565f2eda17246f6c32397d03fc5100664223d6715f40b502a59e86e21112f1.scope: Deactivated successfully.
Jan 31 05:05:59 np0005603787 python3.9[137488]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:05:59 np0005603787 podman[137518]: 2026-01-31 10:05:59.846859522 +0000 UTC m=+0.035499854 container create 113bbe7cf161298ab40846365991609c0b5474eef636a8ec248e4dcb6706ddd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 05:05:59 np0005603787 systemd[1]: Started libpod-conmon-113bbe7cf161298ab40846365991609c0b5474eef636a8ec248e4dcb6706ddd4.scope.
Jan 31 05:05:59 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:05:59 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ce248d5b37c65a4df488dae9da12f5ddbe493b943423918839b6a04c7bc5c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:05:59 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ce248d5b37c65a4df488dae9da12f5ddbe493b943423918839b6a04c7bc5c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:05:59 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ce248d5b37c65a4df488dae9da12f5ddbe493b943423918839b6a04c7bc5c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:05:59 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ce248d5b37c65a4df488dae9da12f5ddbe493b943423918839b6a04c7bc5c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:05:59 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ce248d5b37c65a4df488dae9da12f5ddbe493b943423918839b6a04c7bc5c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:05:59 np0005603787 podman[137518]: 2026-01-31 10:05:59.831150274 +0000 UTC m=+0.019790746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:05:59 np0005603787 podman[137518]: 2026-01-31 10:05:59.933792702 +0000 UTC m=+0.122433114 container init 113bbe7cf161298ab40846365991609c0b5474eef636a8ec248e4dcb6706ddd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:05:59 np0005603787 podman[137518]: 2026-01-31 10:05:59.946231073 +0000 UTC m=+0.134871445 container start 113bbe7cf161298ab40846365991609c0b5474eef636a8ec248e4dcb6706ddd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 05:05:59 np0005603787 podman[137518]: 2026-01-31 10:05:59.950374533 +0000 UTC m=+0.139014905 container attach 113bbe7cf161298ab40846365991609c0b5474eef636a8ec248e4dcb6706ddd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_jackson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:06:00 np0005603787 python3.9[137664]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853959.3036828-212-196410674847382/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:06:00 np0005603787 condescending_jackson[137582]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:06:00 np0005603787 condescending_jackson[137582]: --> All data devices are unavailable
Jan 31 05:06:00 np0005603787 systemd[1]: libpod-113bbe7cf161298ab40846365991609c0b5474eef636a8ec248e4dcb6706ddd4.scope: Deactivated successfully.
Jan 31 05:06:00 np0005603787 podman[137518]: 2026-01-31 10:06:00.365224417 +0000 UTC m=+0.553864749 container died 113bbe7cf161298ab40846365991609c0b5474eef636a8ec248e4dcb6706ddd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 05:06:00 np0005603787 systemd[1]: var-lib-containers-storage-overlay-09ce248d5b37c65a4df488dae9da12f5ddbe493b943423918839b6a04c7bc5c6-merged.mount: Deactivated successfully.
Jan 31 05:06:00 np0005603787 podman[137518]: 2026-01-31 10:06:00.402638232 +0000 UTC m=+0.591278564 container remove 113bbe7cf161298ab40846365991609c0b5474eef636a8ec248e4dcb6706ddd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_jackson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 05:06:00 np0005603787 systemd[1]: libpod-conmon-113bbe7cf161298ab40846365991609c0b5474eef636a8ec248e4dcb6706ddd4.scope: Deactivated successfully.
Jan 31 05:06:00 np0005603787 podman[137904]: 2026-01-31 10:06:00.76453119 +0000 UTC m=+0.033137752 container create b84d54f84f4566fc92b5e4bb79760814bd8264f53e83e8c9ea37f061b6fb0baa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shockley, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 05:06:00 np0005603787 systemd[1]: Started libpod-conmon-b84d54f84f4566fc92b5e4bb79760814bd8264f53e83e8c9ea37f061b6fb0baa.scope.
Jan 31 05:06:00 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:06:00 np0005603787 podman[137904]: 2026-01-31 10:06:00.750940339 +0000 UTC m=+0.019546921 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:06:00 np0005603787 podman[137904]: 2026-01-31 10:06:00.851690566 +0000 UTC m=+0.120297148 container init b84d54f84f4566fc92b5e4bb79760814bd8264f53e83e8c9ea37f061b6fb0baa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shockley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 05:06:00 np0005603787 podman[137904]: 2026-01-31 10:06:00.858763754 +0000 UTC m=+0.127370326 container start b84d54f84f4566fc92b5e4bb79760814bd8264f53e83e8c9ea37f061b6fb0baa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shockley, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 05:06:00 np0005603787 podman[137904]: 2026-01-31 10:06:00.862642427 +0000 UTC m=+0.131249019 container attach b84d54f84f4566fc92b5e4bb79760814bd8264f53e83e8c9ea37f061b6fb0baa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shockley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 05:06:00 np0005603787 agitated_shockley[137920]: 167 167
Jan 31 05:06:00 np0005603787 systemd[1]: libpod-b84d54f84f4566fc92b5e4bb79760814bd8264f53e83e8c9ea37f061b6fb0baa.scope: Deactivated successfully.
Jan 31 05:06:00 np0005603787 conmon[137920]: conmon b84d54f84f4566fc92b5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b84d54f84f4566fc92b5e4bb79760814bd8264f53e83e8c9ea37f061b6fb0baa.scope/container/memory.events
Jan 31 05:06:00 np0005603787 podman[137904]: 2026-01-31 10:06:00.86577137 +0000 UTC m=+0.134377972 container died b84d54f84f4566fc92b5e4bb79760814bd8264f53e83e8c9ea37f061b6fb0baa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 05:06:00 np0005603787 python3.9[137903]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:06:00 np0005603787 systemd[1]: var-lib-containers-storage-overlay-a88d0242a825615fc375e0dfb165d79109a9d6364dfc43e9711fe94149090260-merged.mount: Deactivated successfully.
Jan 31 05:06:00 np0005603787 podman[137904]: 2026-01-31 10:06:00.912753108 +0000 UTC m=+0.181359670 container remove b84d54f84f4566fc92b5e4bb79760814bd8264f53e83e8c9ea37f061b6fb0baa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:06:00 np0005603787 systemd[1]: libpod-conmon-b84d54f84f4566fc92b5e4bb79760814bd8264f53e83e8c9ea37f061b6fb0baa.scope: Deactivated successfully.
Jan 31 05:06:01 np0005603787 podman[137968]: 2026-01-31 10:06:01.053781396 +0000 UTC m=+0.043795294 container create 6d22635a793da5fc7648b9f09d106efc42277354cf26e2555b921d0be0875de5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dewdney, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:06:01 np0005603787 systemd[1]: Started libpod-conmon-6d22635a793da5fc7648b9f09d106efc42277354cf26e2555b921d0be0875de5.scope.
Jan 31 05:06:01 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:06:01 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b8e335ade92d38f8823f67ee7f61a5f0d1f74426ab930d808844776e189c11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:06:01 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b8e335ade92d38f8823f67ee7f61a5f0d1f74426ab930d808844776e189c11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:06:01 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b8e335ade92d38f8823f67ee7f61a5f0d1f74426ab930d808844776e189c11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:06:01 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b8e335ade92d38f8823f67ee7f61a5f0d1f74426ab930d808844776e189c11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:06:01 np0005603787 podman[137968]: 2026-01-31 10:06:01.128546344 +0000 UTC m=+0.118560242 container init 6d22635a793da5fc7648b9f09d106efc42277354cf26e2555b921d0be0875de5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dewdney, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:06:01 np0005603787 podman[137968]: 2026-01-31 10:06:01.035241634 +0000 UTC m=+0.025255552 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:06:01 np0005603787 podman[137968]: 2026-01-31 10:06:01.1344308 +0000 UTC m=+0.124444678 container start 6d22635a793da5fc7648b9f09d106efc42277354cf26e2555b921d0be0875de5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:06:01 np0005603787 podman[137968]: 2026-01-31 10:06:01.138417486 +0000 UTC m=+0.128431364 container attach 6d22635a793da5fc7648b9f09d106efc42277354cf26e2555b921d0be0875de5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 05:06:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]: {
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:    "0": [
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:        {
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "devices": [
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "/dev/loop3"
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            ],
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "lv_name": "ceph_lv0",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "lv_size": "21470642176",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "name": "ceph_lv0",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "tags": {
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.cluster_name": "ceph",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.crush_device_class": "",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.encrypted": "0",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.objectstore": "bluestore",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.osd_id": "0",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.type": "block",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.vdo": "0",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.with_tpm": "0"
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            },
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "type": "block",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "vg_name": "ceph_vg0"
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:        }
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:    ],
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:    "1": [
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:        {
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "devices": [
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "/dev/loop4"
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            ],
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "lv_name": "ceph_lv1",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "lv_size": "21470642176",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "name": "ceph_lv1",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "tags": {
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.cluster_name": "ceph",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.crush_device_class": "",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.encrypted": "0",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.objectstore": "bluestore",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.osd_id": "1",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.type": "block",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.vdo": "0",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.with_tpm": "0"
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            },
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "type": "block",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "vg_name": "ceph_vg1"
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:        }
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:    ],
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:    "2": [
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:        {
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "devices": [
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "/dev/loop5"
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            ],
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "lv_name": "ceph_lv2",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "lv_size": "21470642176",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "name": "ceph_lv2",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "tags": {
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.cluster_name": "ceph",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.crush_device_class": "",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.encrypted": "0",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.objectstore": "bluestore",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.osd_id": "2",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.type": "block",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.vdo": "0",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:                "ceph.with_tpm": "0"
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            },
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "type": "block",
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:            "vg_name": "ceph_vg2"
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:        }
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]:    ]
Jan 31 05:06:01 np0005603787 wonderful_dewdney[138013]: }
Jan 31 05:06:01 np0005603787 systemd[1]: libpod-6d22635a793da5fc7648b9f09d106efc42277354cf26e2555b921d0be0875de5.scope: Deactivated successfully.
Jan 31 05:06:01 np0005603787 podman[137968]: 2026-01-31 10:06:01.466807943 +0000 UTC m=+0.456821851 container died 6d22635a793da5fc7648b9f09d106efc42277354cf26e2555b921d0be0875de5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dewdney, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:06:01 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f8b8e335ade92d38f8823f67ee7f61a5f0d1f74426ab930d808844776e189c11-merged.mount: Deactivated successfully.
Jan 31 05:06:01 np0005603787 python3.9[138118]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:06:01 np0005603787 podman[137968]: 2026-01-31 10:06:01.517049058 +0000 UTC m=+0.507062936 container remove 6d22635a793da5fc7648b9f09d106efc42277354cf26e2555b921d0be0875de5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dewdney, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 05:06:01 np0005603787 systemd[1]: libpod-conmon-6d22635a793da5fc7648b9f09d106efc42277354cf26e2555b921d0be0875de5.scope: Deactivated successfully.
Jan 31 05:06:01 np0005603787 podman[138289]: 2026-01-31 10:06:01.922616247 +0000 UTC m=+0.042240335 container create a337ed17462e454d30b204e52612f92c9ebbd349b586d209776e25245dfefd96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:06:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:06:01 np0005603787 systemd[1]: Started libpod-conmon-a337ed17462e454d30b204e52612f92c9ebbd349b586d209776e25245dfefd96.scope.
Jan 31 05:06:01 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:06:01 np0005603787 podman[138289]: 2026-01-31 10:06:01.898483525 +0000 UTC m=+0.018107643 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:06:02 np0005603787 podman[138289]: 2026-01-31 10:06:02.006488826 +0000 UTC m=+0.126113014 container init a337ed17462e454d30b204e52612f92c9ebbd349b586d209776e25245dfefd96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carver, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 05:06:02 np0005603787 podman[138289]: 2026-01-31 10:06:02.012508826 +0000 UTC m=+0.132132924 container start a337ed17462e454d30b204e52612f92c9ebbd349b586d209776e25245dfefd96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:06:02 np0005603787 podman[138289]: 2026-01-31 10:06:02.01719072 +0000 UTC m=+0.136814848 container attach a337ed17462e454d30b204e52612f92c9ebbd349b586d209776e25245dfefd96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 05:06:02 np0005603787 naughty_carver[138339]: 167 167
Jan 31 05:06:02 np0005603787 systemd[1]: libpod-a337ed17462e454d30b204e52612f92c9ebbd349b586d209776e25245dfefd96.scope: Deactivated successfully.
Jan 31 05:06:02 np0005603787 conmon[138339]: conmon a337ed17462e454d30b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a337ed17462e454d30b204e52612f92c9ebbd349b586d209776e25245dfefd96.scope/container/memory.events
Jan 31 05:06:02 np0005603787 podman[138289]: 2026-01-31 10:06:02.018670749 +0000 UTC m=+0.138294847 container died a337ed17462e454d30b204e52612f92c9ebbd349b586d209776e25245dfefd96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carver, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:06:02 np0005603787 systemd[1]: var-lib-containers-storage-overlay-d76593c1db6ddb59710a808ede293db6b28cbd4ae5015f4116aa7e2ea3415b84-merged.mount: Deactivated successfully.
Jan 31 05:06:02 np0005603787 podman[138289]: 2026-01-31 10:06:02.057666525 +0000 UTC m=+0.177290613 container remove a337ed17462e454d30b204e52612f92c9ebbd349b586d209776e25245dfefd96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carver, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 05:06:02 np0005603787 systemd[1]: libpod-conmon-a337ed17462e454d30b204e52612f92c9ebbd349b586d209776e25245dfefd96.scope: Deactivated successfully.
Jan 31 05:06:02 np0005603787 python3.9[138383]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:06:02 np0005603787 podman[138391]: 2026-01-31 10:06:02.245776044 +0000 UTC m=+0.052187017 container create c584034cd649c126cb3fcf0b3d566185fbce2e112e7b4a82636a6342f895c11d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_banzai, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 05:06:02 np0005603787 systemd[1]: Started libpod-conmon-c584034cd649c126cb3fcf0b3d566185fbce2e112e7b4a82636a6342f895c11d.scope.
Jan 31 05:06:02 np0005603787 podman[138391]: 2026-01-31 10:06:02.224566411 +0000 UTC m=+0.030977424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:06:02 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:06:02 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1288d6141f1d7795c28a5b6cf5ccce40c9e25785b3960ef87ccfde8ab6a9274/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:06:02 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1288d6141f1d7795c28a5b6cf5ccce40c9e25785b3960ef87ccfde8ab6a9274/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:06:02 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1288d6141f1d7795c28a5b6cf5ccce40c9e25785b3960ef87ccfde8ab6a9274/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:06:02 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1288d6141f1d7795c28a5b6cf5ccce40c9e25785b3960ef87ccfde8ab6a9274/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:06:02 np0005603787 podman[138391]: 2026-01-31 10:06:02.348880085 +0000 UTC m=+0.155291088 container init c584034cd649c126cb3fcf0b3d566185fbce2e112e7b4a82636a6342f895c11d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:06:02 np0005603787 podman[138391]: 2026-01-31 10:06:02.366617876 +0000 UTC m=+0.173028849 container start c584034cd649c126cb3fcf0b3d566185fbce2e112e7b4a82636a6342f895c11d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:06:02 np0005603787 podman[138391]: 2026-01-31 10:06:02.3705255 +0000 UTC m=+0.176936503 container attach c584034cd649c126cb3fcf0b3d566185fbce2e112e7b4a82636a6342f895c11d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 05:06:02 np0005603787 python3.9[138574]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:06:02 np0005603787 lvm[138661]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:06:02 np0005603787 lvm[138661]: VG ceph_vg0 finished
Jan 31 05:06:02 np0005603787 lvm[138664]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:06:02 np0005603787 lvm[138664]: VG ceph_vg1 finished
Jan 31 05:06:02 np0005603787 lvm[138673]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:06:02 np0005603787 lvm[138673]: VG ceph_vg2 finished
Jan 31 05:06:03 np0005603787 zealous_banzai[138432]: {}
Jan 31 05:06:03 np0005603787 systemd[1]: libpod-c584034cd649c126cb3fcf0b3d566185fbce2e112e7b4a82636a6342f895c11d.scope: Deactivated successfully.
Jan 31 05:06:03 np0005603787 podman[138391]: 2026-01-31 10:06:03.081759292 +0000 UTC m=+0.888170285 container died c584034cd649c126cb3fcf0b3d566185fbce2e112e7b4a82636a6342f895c11d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:06:03 np0005603787 systemd[1]: var-lib-containers-storage-overlay-b1288d6141f1d7795c28a5b6cf5ccce40c9e25785b3960ef87ccfde8ab6a9274-merged.mount: Deactivated successfully.
Jan 31 05:06:03 np0005603787 podman[138391]: 2026-01-31 10:06:03.124270921 +0000 UTC m=+0.930681884 container remove c584034cd649c126cb3fcf0b3d566185fbce2e112e7b4a82636a6342f895c11d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 05:06:03 np0005603787 systemd[1]: libpod-conmon-c584034cd649c126cb3fcf0b3d566185fbce2e112e7b4a82636a6342f895c11d.scope: Deactivated successfully.
Jan 31 05:06:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:06:03 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:06:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:06:03 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:06:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:03 np0005603787 python3.9[138817]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:06:03 np0005603787 python3.9[138988]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:06:04 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:06:04 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:06:04 np0005603787 python3.9[139143]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:06:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:05 np0005603787 python3.9[139293]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:06:06 np0005603787 python3.9[139446]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:06:06 np0005603787 ovs-vsctl[139447]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 31 05:06:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:06:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:07 np0005603787 python3.9[139599]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:06:07 np0005603787 python3.9[139754]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:06:07 np0005603787 ovs-vsctl[139755]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 31 05:06:08 np0005603787 python3.9[139905]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:06:09 np0005603787 python3.9[140059]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:06:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:09 np0005603787 python3.9[140211]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:06:10 np0005603787 python3.9[140289]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:06:10 np0005603787 python3.9[140441]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:06:11 np0005603787 python3.9[140519]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:06:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:11 np0005603787 python3.9[140671]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:06:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:06:12 np0005603787 python3.9[140823]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:06:12 np0005603787 python3.9[140901]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:06:13 np0005603787 python3.9[141053]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:06:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:13 np0005603787 python3.9[141131]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:06:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:06:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:06:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:06:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:06:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:06:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:06:14 np0005603787 python3.9[141283]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:06:14 np0005603787 systemd[1]: Reloading.
Jan 31 05:06:14 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:06:14 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:06:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:15 np0005603787 python3.9[141472]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:06:15 np0005603787 python3.9[141550]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:06:16 np0005603787 python3.9[141702]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:06:16 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:06:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:17 np0005603787 python3.9[141780]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:06:17 np0005603787 python3.9[141932]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:06:17 np0005603787 systemd[1]: Reloading.
Jan 31 05:06:18 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:06:18 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:06:18 np0005603787 systemd[1]: Starting Create netns directory...
Jan 31 05:06:18 np0005603787 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 05:06:18 np0005603787 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 05:06:18 np0005603787 systemd[1]: Finished Create netns directory.
Jan 31 05:06:18 np0005603787 python3.9[142125]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:06:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:19 np0005603787 python3.9[142277]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:06:20 np0005603787 python3.9[142400]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853979.108874-463-116547139753756/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:06:20 np0005603787 python3.9[142552]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:06:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:21 np0005603787 python3.9[142704]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:06:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:06:22 np0005603787 python3.9[142856]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:06:22 np0005603787 python3.9[142979]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853981.767968-496-105547202475417/.source.json _original_basename=.gi36h_tt follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:06:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:23 np0005603787 python3.9[143129]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:06:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:25 np0005603787 python3.9[143552]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 31 05:06:26 np0005603787 python3.9[143704]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 05:06:26 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:06:27 np0005603787 python3[143856]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 05:06:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:31 np0005603787 podman[143870]: 2026-01-31 10:06:31.391946186 +0000 UTC m=+4.183596024 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 05:06:31 np0005603787 podman[143990]: 2026-01-31 10:06:31.500943063 +0000 UTC m=+0.031289903 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 05:06:31 np0005603787 podman[143990]: 2026-01-31 10:06:31.640491281 +0000 UTC m=+0.170838151 container create da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 31 05:06:31 np0005603787 python3[143856]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:06:31.967180) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853991967200, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1590, "num_deletes": 250, "total_data_size": 2327932, "memory_usage": 2369848, "flush_reason": "Manual Compaction"}
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853991974809, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1361966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7416, "largest_seqno": 9005, "table_properties": {"data_size": 1356692, "index_size": 2350, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15353, "raw_average_key_size": 20, "raw_value_size": 1344223, "raw_average_value_size": 1816, "num_data_blocks": 111, "num_entries": 740, "num_filter_entries": 740, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853841, "oldest_key_time": 1769853841, "file_creation_time": 1769853991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 7663 microseconds, and 2541 cpu microseconds.
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:06:31.974841) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1361966 bytes OK
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:06:31.974854) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:06:31.976243) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:06:31.976255) EVENT_LOG_v1 {"time_micros": 1769853991976251, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:06:31.976268) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2320800, prev total WAL file size 2320800, number of live WAL files 2.
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:06:31.976678) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1330KB)], [20(7674KB)]
Jan 31 05:06:31 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853991976735, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9220592, "oldest_snapshot_seqno": -1}
Jan 31 05:06:32 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3411 keys, 7166761 bytes, temperature: kUnknown
Jan 31 05:06:32 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853992012558, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7166761, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7140672, "index_size": 16443, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8581, "raw_key_size": 81685, "raw_average_key_size": 23, "raw_value_size": 7075791, "raw_average_value_size": 2074, "num_data_blocks": 728, "num_entries": 3411, "num_filter_entries": 3411, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853439, "oldest_key_time": 0, "file_creation_time": 1769853991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:06:32 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:06:32 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:06:32.012769) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7166761 bytes
Jan 31 05:06:32 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:06:32.014141) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 257.0 rd, 199.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.5 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(12.0) write-amplify(5.3) OK, records in: 3853, records dropped: 442 output_compression: NoCompression
Jan 31 05:06:32 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:06:32.014165) EVENT_LOG_v1 {"time_micros": 1769853992014155, "job": 6, "event": "compaction_finished", "compaction_time_micros": 35877, "compaction_time_cpu_micros": 13928, "output_level": 6, "num_output_files": 1, "total_output_size": 7166761, "num_input_records": 3853, "num_output_records": 3411, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 05:06:32 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:06:32 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853992014412, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 31 05:06:32 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:06:32 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769853992014927, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 31 05:06:32 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:06:31.976601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:06:32 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:06:32.015008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:06:32 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:06:32.015014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:06:32 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:06:32.015016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:06:32 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:06:32.015020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:06:32 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:06:32.015022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:06:32 np0005603787 python3.9[144180]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:06:33 np0005603787 python3.9[144334]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:06:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:33 np0005603787 python3.9[144410]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:06:34 np0005603787 python3.9[144561]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769853993.5845642-574-25632245064677/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:06:34 np0005603787 python3.9[144637]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 05:06:34 np0005603787 systemd[1]: Reloading.
Jan 31 05:06:34 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:06:34 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:06:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:35 np0005603787 python3.9[144748]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:06:35 np0005603787 systemd[1]: Reloading.
Jan 31 05:06:35 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:06:35 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:06:35 np0005603787 systemd[1]: Starting ovn_controller container...
Jan 31 05:06:35 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:06:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76d4c99594b68a6aab1f2b9447283e6191c990bd1afe93908e981b069427bce4/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 31 05:06:36 np0005603787 systemd[1]: Started /usr/bin/podman healthcheck run da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209.
Jan 31 05:06:36 np0005603787 podman[144790]: 2026-01-31 10:06:36.023428341 +0000 UTC m=+0.215117320 container init da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: + sudo -E kolla_set_configs
Jan 31 05:06:36 np0005603787 podman[144790]: 2026-01-31 10:06:36.049897509 +0000 UTC m=+0.241586408 container start da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 05:06:36 np0005603787 edpm-start-podman-container[144790]: ovn_controller
Jan 31 05:06:36 np0005603787 systemd[1]: Created slice User Slice of UID 0.
Jan 31 05:06:36 np0005603787 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 31 05:06:36 np0005603787 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 31 05:06:36 np0005603787 systemd[1]: Starting User Manager for UID 0...
Jan 31 05:06:36 np0005603787 edpm-start-podman-container[144789]: Creating additional drop-in dependency for "ovn_controller" (da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209)
Jan 31 05:06:36 np0005603787 podman[144812]: 2026-01-31 10:06:36.111722035 +0000 UTC m=+0.054046006 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 05:06:36 np0005603787 systemd[1]: da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209-5e5fcb224e9778f8.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 05:06:36 np0005603787 systemd[1]: da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209-5e5fcb224e9778f8.service: Failed with result 'exit-code'.
Jan 31 05:06:36 np0005603787 systemd[1]: Reloading.
Jan 31 05:06:36 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:06:36 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:06:36 np0005603787 systemd[144843]: Queued start job for default target Main User Target.
Jan 31 05:06:36 np0005603787 systemd[144843]: Created slice User Application Slice.
Jan 31 05:06:36 np0005603787 systemd[144843]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 31 05:06:36 np0005603787 systemd[144843]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 05:06:36 np0005603787 systemd[144843]: Reached target Paths.
Jan 31 05:06:36 np0005603787 systemd[144843]: Reached target Timers.
Jan 31 05:06:36 np0005603787 systemd[144843]: Starting D-Bus User Message Bus Socket...
Jan 31 05:06:36 np0005603787 systemd[144843]: Starting Create User's Volatile Files and Directories...
Jan 31 05:06:36 np0005603787 systemd[144843]: Listening on D-Bus User Message Bus Socket.
Jan 31 05:06:36 np0005603787 systemd[144843]: Reached target Sockets.
Jan 31 05:06:36 np0005603787 systemd[144843]: Finished Create User's Volatile Files and Directories.
Jan 31 05:06:36 np0005603787 systemd[144843]: Reached target Basic System.
Jan 31 05:06:36 np0005603787 systemd[144843]: Reached target Main User Target.
Jan 31 05:06:36 np0005603787 systemd[144843]: Startup finished in 128ms.
Jan 31 05:06:36 np0005603787 systemd[1]: Started User Manager for UID 0.
Jan 31 05:06:36 np0005603787 systemd[1]: Started ovn_controller container.
Jan 31 05:06:36 np0005603787 systemd[1]: Started Session c1 of User root.
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: INFO:__main__:Validating config file
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: INFO:__main__:Writing out command to execute
Jan 31 05:06:36 np0005603787 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: ++ cat /run_command
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: + ARGS=
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: + sudo kolla_copy_cacerts
Jan 31 05:06:36 np0005603787 systemd[1]: Started Session c2 of User root.
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: + [[ ! -n '' ]]
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: + . kolla_extend_start
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: + umask 0022
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 31 05:06:36 np0005603787 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 31 05:06:36 np0005603787 NetworkManager[48992]: <info>  [1769853996.4886] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 31 05:06:36 np0005603787 NetworkManager[48992]: <info>  [1769853996.4894] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 05:06:36 np0005603787 NetworkManager[48992]: <warn>  [1769853996.4896] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 05:06:36 np0005603787 NetworkManager[48992]: <info>  [1769853996.4907] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 31 05:06:36 np0005603787 NetworkManager[48992]: <info>  [1769853996.4915] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 31 05:06:36 np0005603787 NetworkManager[48992]: <info>  [1769853996.4920] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 05:06:36 np0005603787 kernel: br-int: entered promiscuous mode
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 31 05:06:36 np0005603787 systemd-udevd[144939]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 05:06:36 np0005603787 ovn_controller[144805]: 2026-01-31T10:06:36Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 05:06:36 np0005603787 NetworkManager[48992]: <info>  [1769853996.5165] manager: (ovn-d4bfdd-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 31 05:06:36 np0005603787 NetworkManager[48992]: <info>  [1769853996.5316] device (genev_sys_6081): carrier: link connected
Jan 31 05:06:36 np0005603787 kernel: genev_sys_6081: entered promiscuous mode
Jan 31 05:06:36 np0005603787 NetworkManager[48992]: <info>  [1769853996.5321] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 31 05:06:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:06:37 np0005603787 python3.9[145069]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 05:06:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:37 np0005603787 python3.9[145221]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:06:38 np0005603787 python3.9[145344]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853997.5686471-619-152820647188757/.source.yaml _original_basename=.9p2xpqg_ follow=False checksum=15ea23051c8d5fd933c5e6ce193957baf36ec626 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:06:38 np0005603787 python3.9[145496]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:06:38 np0005603787 ovs-vsctl[145497]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 31 05:06:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:39 np0005603787 python3.9[145649]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:06:39 np0005603787 ovs-vsctl[145651]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 31 05:06:40 np0005603787 python3.9[145804]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:06:40 np0005603787 ovs-vsctl[145805]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 31 05:06:40 np0005603787 systemd[1]: session-46.scope: Deactivated successfully.
Jan 31 05:06:40 np0005603787 systemd[1]: session-46.scope: Consumed 47.972s CPU time.
Jan 31 05:06:40 np0005603787 systemd-logind[786]: Session 46 logged out. Waiting for processes to exit.
Jan 31 05:06:40 np0005603787 systemd-logind[786]: Removed session 46.
Jan 31 05:06:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:06:43
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'backups', 'images', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log']
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:06:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:06:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:46 np0005603787 systemd-logind[786]: New session 48 of user zuul.
Jan 31 05:06:46 np0005603787 systemd[1]: Started Session 48 of User zuul.
Jan 31 05:06:46 np0005603787 systemd[1]: Stopping User Manager for UID 0...
Jan 31 05:06:46 np0005603787 systemd[144843]: Activating special unit Exit the Session...
Jan 31 05:06:46 np0005603787 systemd[144843]: Stopped target Main User Target.
Jan 31 05:06:46 np0005603787 systemd[144843]: Stopped target Basic System.
Jan 31 05:06:46 np0005603787 systemd[144843]: Stopped target Paths.
Jan 31 05:06:46 np0005603787 systemd[144843]: Stopped target Sockets.
Jan 31 05:06:46 np0005603787 systemd[144843]: Stopped target Timers.
Jan 31 05:06:46 np0005603787 systemd[144843]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 05:06:46 np0005603787 systemd[144843]: Closed D-Bus User Message Bus Socket.
Jan 31 05:06:46 np0005603787 systemd[144843]: Stopped Create User's Volatile Files and Directories.
Jan 31 05:06:46 np0005603787 systemd[144843]: Removed slice User Application Slice.
Jan 31 05:06:46 np0005603787 systemd[144843]: Reached target Shutdown.
Jan 31 05:06:46 np0005603787 systemd[144843]: Finished Exit the Session.
Jan 31 05:06:46 np0005603787 systemd[144843]: Reached target Exit the Session.
Jan 31 05:06:46 np0005603787 systemd[1]: user@0.service: Deactivated successfully.
Jan 31 05:06:46 np0005603787 systemd[1]: Stopped User Manager for UID 0.
Jan 31 05:06:46 np0005603787 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 31 05:06:46 np0005603787 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 31 05:06:46 np0005603787 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 31 05:06:46 np0005603787 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 31 05:06:46 np0005603787 systemd[1]: Removed slice User Slice of UID 0.
Jan 31 05:06:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:06:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:47 np0005603787 python3.9[145988]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:06:48 np0005603787 python3.9[146144]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:06:48 np0005603787 python3.9[146296]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:06:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:49 np0005603787 python3.9[146448]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:06:50 np0005603787 python3.9[146600]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:06:50 np0005603787 python3.9[146752]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:06:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:51 np0005603787 python3.9[146902]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:06:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:06:52 np0005603787 python3.9[147054]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 05:06:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:53 np0005603787 python3.9[147204]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:06:53 np0005603787 python3.9[147325]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769854012.8390758-81-224514785248694/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1786947556520692e-06 of space, bias 4.0, pg target 0.0014144337067824831 quantized to 16 (current 16)
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:06:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:06:54 np0005603787 python3.9[147475]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:06:54 np0005603787 python3.9[147597]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769854014.091732-96-246341458816980/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:06:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:55 np0005603787 python3.9[147749]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:06:56 np0005603787 python3.9[147833]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:06:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:06:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:58 np0005603787 python3.9[147986]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 05:06:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:06:59 np0005603787 python3.9[148139]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:06:59 np0005603787 python3.9[148260]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769854019.1203547-133-170409786474991/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:07:00 np0005603787 python3.9[148410]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:07:01 np0005603787 python3.9[148531]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769854020.1227705-133-46406401798209/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:07:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:07:02 np0005603787 python3.9[148681]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:07:02 np0005603787 python3.9[148802]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769854021.7617655-177-239088862751878/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:07:03 np0005603787 python3.9[148952]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:07:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:03 np0005603787 python3.9[149123]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769854022.7574346-177-173255588899901/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:07:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:07:03 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:07:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:07:03 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:07:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:07:03 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:07:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:07:03 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:07:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:07:03 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:07:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:07:03 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:07:03 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:07:03 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:07:03 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:07:04 np0005603787 python3.9[149354]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:07:04 np0005603787 podman[149366]: 2026-01-31 10:07:04.124461115 +0000 UTC m=+0.048306176 container create 3f56698f7d3ca9237b1d1eb219f2afdae445f1047658bdae6614adabdaf7df90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_heisenberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 05:07:04 np0005603787 systemd[1]: Started libpod-conmon-3f56698f7d3ca9237b1d1eb219f2afdae445f1047658bdae6614adabdaf7df90.scope.
Jan 31 05:07:04 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:07:04 np0005603787 podman[149366]: 2026-01-31 10:07:04.188488959 +0000 UTC m=+0.112334030 container init 3f56698f7d3ca9237b1d1eb219f2afdae445f1047658bdae6614adabdaf7df90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_heisenberg, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 05:07:04 np0005603787 podman[149366]: 2026-01-31 10:07:04.100891403 +0000 UTC m=+0.024736504 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:07:04 np0005603787 podman[149366]: 2026-01-31 10:07:04.1969948 +0000 UTC m=+0.120839861 container start 3f56698f7d3ca9237b1d1eb219f2afdae445f1047658bdae6614adabdaf7df90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:07:04 np0005603787 podman[149366]: 2026-01-31 10:07:04.200610254 +0000 UTC m=+0.124455325 container attach 3f56698f7d3ca9237b1d1eb219f2afdae445f1047658bdae6614adabdaf7df90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 05:07:04 np0005603787 distracted_heisenberg[149407]: 167 167
Jan 31 05:07:04 np0005603787 systemd[1]: libpod-3f56698f7d3ca9237b1d1eb219f2afdae445f1047658bdae6614adabdaf7df90.scope: Deactivated successfully.
Jan 31 05:07:04 np0005603787 podman[149366]: 2026-01-31 10:07:04.203538449 +0000 UTC m=+0.127383500 container died 3f56698f7d3ca9237b1d1eb219f2afdae445f1047658bdae6614adabdaf7df90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_heisenberg, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:07:04 np0005603787 systemd[1]: var-lib-containers-storage-overlay-c4546867f66b708955a0b3a564e4198a12db65fa4eabcf17f1eb4b4ffa2fbc21-merged.mount: Deactivated successfully.
Jan 31 05:07:04 np0005603787 podman[149366]: 2026-01-31 10:07:04.244290069 +0000 UTC m=+0.168135120 container remove 3f56698f7d3ca9237b1d1eb219f2afdae445f1047658bdae6614adabdaf7df90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:07:04 np0005603787 systemd[1]: libpod-conmon-3f56698f7d3ca9237b1d1eb219f2afdae445f1047658bdae6614adabdaf7df90.scope: Deactivated successfully.
Jan 31 05:07:04 np0005603787 podman[149461]: 2026-01-31 10:07:04.360390915 +0000 UTC m=+0.040844292 container create 170548d45033237d84b13691f8fb6ec4b91a83e5b03ef380ce3cc74e9cd97d5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mclaren, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:07:04 np0005603787 systemd[1]: Started libpod-conmon-170548d45033237d84b13691f8fb6ec4b91a83e5b03ef380ce3cc74e9cd97d5f.scope.
Jan 31 05:07:04 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:07:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065bdbdb56b561aadafe65f9ecb10e1e48f449134b7ccc1a35aaede44efa1e5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:07:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065bdbdb56b561aadafe65f9ecb10e1e48f449134b7ccc1a35aaede44efa1e5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:07:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065bdbdb56b561aadafe65f9ecb10e1e48f449134b7ccc1a35aaede44efa1e5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:07:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065bdbdb56b561aadafe65f9ecb10e1e48f449134b7ccc1a35aaede44efa1e5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:07:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065bdbdb56b561aadafe65f9ecb10e1e48f449134b7ccc1a35aaede44efa1e5b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:07:04 np0005603787 podman[149461]: 2026-01-31 10:07:04.341548715 +0000 UTC m=+0.022002102 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:07:04 np0005603787 podman[149461]: 2026-01-31 10:07:04.453291769 +0000 UTC m=+0.133745156 container init 170548d45033237d84b13691f8fb6ec4b91a83e5b03ef380ce3cc74e9cd97d5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 05:07:04 np0005603787 podman[149461]: 2026-01-31 10:07:04.457292693 +0000 UTC m=+0.137746060 container start 170548d45033237d84b13691f8fb6ec4b91a83e5b03ef380ce3cc74e9cd97d5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mclaren, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:07:04 np0005603787 podman[149461]: 2026-01-31 10:07:04.461128112 +0000 UTC m=+0.141581479 container attach 170548d45033237d84b13691f8fb6ec4b91a83e5b03ef380ce3cc74e9cd97d5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 05:07:04 np0005603787 python3.9[149579]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:07:04 np0005603787 charming_mclaren[149522]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:07:04 np0005603787 charming_mclaren[149522]: --> All data devices are unavailable
Jan 31 05:07:04 np0005603787 systemd[1]: libpod-170548d45033237d84b13691f8fb6ec4b91a83e5b03ef380ce3cc74e9cd97d5f.scope: Deactivated successfully.
Jan 31 05:07:04 np0005603787 podman[149461]: 2026-01-31 10:07:04.841663758 +0000 UTC m=+0.522117125 container died 170548d45033237d84b13691f8fb6ec4b91a83e5b03ef380ce3cc74e9cd97d5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mclaren, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 05:07:04 np0005603787 systemd[1]: var-lib-containers-storage-overlay-065bdbdb56b561aadafe65f9ecb10e1e48f449134b7ccc1a35aaede44efa1e5b-merged.mount: Deactivated successfully.
Jan 31 05:07:04 np0005603787 podman[149461]: 2026-01-31 10:07:04.881155934 +0000 UTC m=+0.561609301 container remove 170548d45033237d84b13691f8fb6ec4b91a83e5b03ef380ce3cc74e9cd97d5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mclaren, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:07:04 np0005603787 systemd[1]: libpod-conmon-170548d45033237d84b13691f8fb6ec4b91a83e5b03ef380ce3cc74e9cd97d5f.scope: Deactivated successfully.
Jan 31 05:07:05 np0005603787 podman[149821]: 2026-01-31 10:07:05.23741047 +0000 UTC m=+0.034399695 container create 1ace47eac7574b80a42f678ec69005a51e892d76181b7b4f253e6198bf6f9cc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 05:07:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:05 np0005603787 systemd[1]: Started libpod-conmon-1ace47eac7574b80a42f678ec69005a51e892d76181b7b4f253e6198bf6f9cc3.scope.
Jan 31 05:07:05 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:07:05 np0005603787 podman[149821]: 2026-01-31 10:07:05.309609575 +0000 UTC m=+0.106598820 container init 1ace47eac7574b80a42f678ec69005a51e892d76181b7b4f253e6198bf6f9cc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_fermat, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 05:07:05 np0005603787 podman[149821]: 2026-01-31 10:07:05.317119271 +0000 UTC m=+0.114108486 container start 1ace47eac7574b80a42f678ec69005a51e892d76181b7b4f253e6198bf6f9cc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:07:05 np0005603787 podman[149821]: 2026-01-31 10:07:05.221201829 +0000 UTC m=+0.018191044 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:07:05 np0005603787 podman[149821]: 2026-01-31 10:07:05.32091455 +0000 UTC m=+0.117903815 container attach 1ace47eac7574b80a42f678ec69005a51e892d76181b7b4f253e6198bf6f9cc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:07:05 np0005603787 heuristic_fermat[149837]: 167 167
Jan 31 05:07:05 np0005603787 systemd[1]: libpod-1ace47eac7574b80a42f678ec69005a51e892d76181b7b4f253e6198bf6f9cc3.scope: Deactivated successfully.
Jan 31 05:07:05 np0005603787 conmon[149837]: conmon 1ace47eac7574b80a42f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ace47eac7574b80a42f678ec69005a51e892d76181b7b4f253e6198bf6f9cc3.scope/container/memory.events
Jan 31 05:07:05 np0005603787 podman[149821]: 2026-01-31 10:07:05.32287783 +0000 UTC m=+0.119867085 container died 1ace47eac7574b80a42f678ec69005a51e892d76181b7b4f253e6198bf6f9cc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:07:05 np0005603787 systemd[1]: var-lib-containers-storage-overlay-15e406a8ee32b5f7dbaf0f30b8ccc2510e602ec45150b215a92ec4e0b2962051-merged.mount: Deactivated successfully.
Jan 31 05:07:05 np0005603787 python3.9[149808]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:07:05 np0005603787 podman[149821]: 2026-01-31 10:07:05.371825772 +0000 UTC m=+0.168814997 container remove 1ace47eac7574b80a42f678ec69005a51e892d76181b7b4f253e6198bf6f9cc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:07:05 np0005603787 systemd[1]: libpod-conmon-1ace47eac7574b80a42f678ec69005a51e892d76181b7b4f253e6198bf6f9cc3.scope: Deactivated successfully.
Jan 31 05:07:05 np0005603787 podman[149871]: 2026-01-31 10:07:05.493231317 +0000 UTC m=+0.039773135 container create b2fa51ef01eda5b001925bd672020812136f5e6f1ee62eeb5efdbc01d5bee20d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 05:07:05 np0005603787 systemd[1]: Started libpod-conmon-b2fa51ef01eda5b001925bd672020812136f5e6f1ee62eeb5efdbc01d5bee20d.scope.
Jan 31 05:07:05 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:07:05 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d8d71590da04d44a665bff7d234d45cd6689412475a71adcdc370b9f0584dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:07:05 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d8d71590da04d44a665bff7d234d45cd6689412475a71adcdc370b9f0584dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:07:05 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d8d71590da04d44a665bff7d234d45cd6689412475a71adcdc370b9f0584dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:07:05 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d8d71590da04d44a665bff7d234d45cd6689412475a71adcdc370b9f0584dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:07:05 np0005603787 podman[149871]: 2026-01-31 10:07:05.548915412 +0000 UTC m=+0.095457260 container init b2fa51ef01eda5b001925bd672020812136f5e6f1ee62eeb5efdbc01d5bee20d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 05:07:05 np0005603787 podman[149871]: 2026-01-31 10:07:05.556691335 +0000 UTC m=+0.103233173 container start b2fa51ef01eda5b001925bd672020812136f5e6f1ee62eeb5efdbc01d5bee20d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bartik, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:07:05 np0005603787 podman[149871]: 2026-01-31 10:07:05.561297234 +0000 UTC m=+0.107839052 container attach b2fa51ef01eda5b001925bd672020812136f5e6f1ee62eeb5efdbc01d5bee20d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bartik, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:07:05 np0005603787 podman[149871]: 2026-01-31 10:07:05.476690117 +0000 UTC m=+0.023231955 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]: {
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:    "0": [
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:        {
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "devices": [
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "/dev/loop3"
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            ],
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "lv_name": "ceph_lv0",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "lv_size": "21470642176",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "name": "ceph_lv0",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "tags": {
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.cluster_name": "ceph",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.crush_device_class": "",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.encrypted": "0",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.objectstore": "bluestore",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.osd_id": "0",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.type": "block",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.vdo": "0",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.with_tpm": "0"
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            },
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "type": "block",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "vg_name": "ceph_vg0"
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:        }
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:    ],
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:    "1": [
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:        {
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "devices": [
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "/dev/loop4"
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            ],
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "lv_name": "ceph_lv1",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "lv_size": "21470642176",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "name": "ceph_lv1",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "tags": {
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.cluster_name": "ceph",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.crush_device_class": "",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.encrypted": "0",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.objectstore": "bluestore",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.osd_id": "1",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.type": "block",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.vdo": "0",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.with_tpm": "0"
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            },
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "type": "block",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "vg_name": "ceph_vg1"
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:        }
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:    ],
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:    "2": [
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:        {
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "devices": [
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "/dev/loop5"
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            ],
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "lv_name": "ceph_lv2",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "lv_size": "21470642176",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "name": "ceph_lv2",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "tags": {
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.cluster_name": "ceph",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.crush_device_class": "",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.encrypted": "0",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.objectstore": "bluestore",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.osd_id": "2",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.type": "block",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.vdo": "0",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:                "ceph.with_tpm": "0"
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            },
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "type": "block",
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:            "vg_name": "ceph_vg2"
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:        }
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]:    ]
Jan 31 05:07:05 np0005603787 xenodochial_bartik[149921]: }
Jan 31 05:07:05 np0005603787 python3.9[149960]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:07:05 np0005603787 systemd[1]: libpod-b2fa51ef01eda5b001925bd672020812136f5e6f1ee62eeb5efdbc01d5bee20d.scope: Deactivated successfully.
Jan 31 05:07:05 np0005603787 podman[149871]: 2026-01-31 10:07:05.833420255 +0000 UTC m=+0.379962143 container died b2fa51ef01eda5b001925bd672020812136f5e6f1ee62eeb5efdbc01d5bee20d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:07:05 np0005603787 systemd[1]: var-lib-containers-storage-overlay-c8d8d71590da04d44a665bff7d234d45cd6689412475a71adcdc370b9f0584dc-merged.mount: Deactivated successfully.
Jan 31 05:07:05 np0005603787 podman[149871]: 2026-01-31 10:07:05.88022905 +0000 UTC m=+0.426770868 container remove b2fa51ef01eda5b001925bd672020812136f5e6f1ee62eeb5efdbc01d5bee20d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bartik, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 05:07:05 np0005603787 systemd[1]: libpod-conmon-b2fa51ef01eda5b001925bd672020812136f5e6f1ee62eeb5efdbc01d5bee20d.scope: Deactivated successfully.
Jan 31 05:07:06 np0005603787 ovn_controller[144805]: 2026-01-31T10:07:06Z|00025|memory|INFO|15744 kB peak resident set size after 29.7 seconds
Jan 31 05:07:06 np0005603787 ovn_controller[144805]: 2026-01-31T10:07:06Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 31 05:07:06 np0005603787 podman[150150]: 2026-01-31 10:07:06.243403355 +0000 UTC m=+0.084792373 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 05:07:06 np0005603787 podman[150208]: 2026-01-31 10:07:06.291848484 +0000 UTC m=+0.072112675 container create 50ea80febe97e2c04c2267be77b24df7110d9a6466decb4d4eb01b2ffef7c762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lalande, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 05:07:06 np0005603787 systemd[1]: Started libpod-conmon-50ea80febe97e2c04c2267be77b24df7110d9a6466decb4d4eb01b2ffef7c762.scope.
Jan 31 05:07:06 np0005603787 podman[150208]: 2026-01-31 10:07:06.24203991 +0000 UTC m=+0.022304121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:07:06 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:07:06 np0005603787 podman[150208]: 2026-01-31 10:07:06.368992508 +0000 UTC m=+0.149256729 container init 50ea80febe97e2c04c2267be77b24df7110d9a6466decb4d4eb01b2ffef7c762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lalande, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 05:07:06 np0005603787 podman[150208]: 2026-01-31 10:07:06.378853835 +0000 UTC m=+0.159118016 container start 50ea80febe97e2c04c2267be77b24df7110d9a6466decb4d4eb01b2ffef7c762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lalande, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:07:06 np0005603787 pedantic_lalande[150230]: 167 167
Jan 31 05:07:06 np0005603787 systemd[1]: libpod-50ea80febe97e2c04c2267be77b24df7110d9a6466decb4d4eb01b2ffef7c762.scope: Deactivated successfully.
Jan 31 05:07:06 np0005603787 conmon[150230]: conmon 50ea80febe97e2c04c22 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50ea80febe97e2c04c2267be77b24df7110d9a6466decb4d4eb01b2ffef7c762.scope/container/memory.events
Jan 31 05:07:06 np0005603787 podman[150208]: 2026-01-31 10:07:06.38522141 +0000 UTC m=+0.165485631 container attach 50ea80febe97e2c04c2267be77b24df7110d9a6466decb4d4eb01b2ffef7c762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:07:06 np0005603787 podman[150208]: 2026-01-31 10:07:06.385688572 +0000 UTC m=+0.165952763 container died 50ea80febe97e2c04c2267be77b24df7110d9a6466decb4d4eb01b2ffef7c762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lalande, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:07:06 np0005603787 systemd[1]: var-lib-containers-storage-overlay-e9f1f98d6874461b87b1d417f39b6afc51ef9ce570f9a7a2e961878ab929dc18-merged.mount: Deactivated successfully.
Jan 31 05:07:06 np0005603787 podman[150208]: 2026-01-31 10:07:06.416928713 +0000 UTC m=+0.197192905 container remove 50ea80febe97e2c04c2267be77b24df7110d9a6466decb4d4eb01b2ffef7c762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 05:07:06 np0005603787 python3.9[150206]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:07:06 np0005603787 systemd[1]: libpod-conmon-50ea80febe97e2c04c2267be77b24df7110d9a6466decb4d4eb01b2ffef7c762.scope: Deactivated successfully.
Jan 31 05:07:06 np0005603787 podman[150261]: 2026-01-31 10:07:06.533006549 +0000 UTC m=+0.039009124 container create c0d77dd3b53b6f98dffa035b6a8ea1fc59f3a8c9a59936decc13a7df3a812334 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mclean, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 05:07:06 np0005603787 systemd[1]: Started libpod-conmon-c0d77dd3b53b6f98dffa035b6a8ea1fc59f3a8c9a59936decc13a7df3a812334.scope.
Jan 31 05:07:06 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:07:06 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16fd08de890d08f6bb76dcfe476e0c56232eb2a4910cbc2686b9bdb02ebf0ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:07:06 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16fd08de890d08f6bb76dcfe476e0c56232eb2a4910cbc2686b9bdb02ebf0ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:07:06 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16fd08de890d08f6bb76dcfe476e0c56232eb2a4910cbc2686b9bdb02ebf0ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:07:06 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16fd08de890d08f6bb76dcfe476e0c56232eb2a4910cbc2686b9bdb02ebf0ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:07:06 np0005603787 podman[150261]: 2026-01-31 10:07:06.607888045 +0000 UTC m=+0.113890650 container init c0d77dd3b53b6f98dffa035b6a8ea1fc59f3a8c9a59936decc13a7df3a812334 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 05:07:06 np0005603787 podman[150261]: 2026-01-31 10:07:06.515799963 +0000 UTC m=+0.021802578 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:07:06 np0005603787 podman[150261]: 2026-01-31 10:07:06.613907401 +0000 UTC m=+0.119909976 container start c0d77dd3b53b6f98dffa035b6a8ea1fc59f3a8c9a59936decc13a7df3a812334 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 05:07:06 np0005603787 podman[150261]: 2026-01-31 10:07:06.618423488 +0000 UTC m=+0.124426093 container attach c0d77dd3b53b6f98dffa035b6a8ea1fc59f3a8c9a59936decc13a7df3a812334 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 05:07:06 np0005603787 python3.9[150354]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:07:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:07:07 np0005603787 lvm[150573]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:07:07 np0005603787 lvm[150573]: VG ceph_vg1 finished
Jan 31 05:07:07 np0005603787 lvm[150560]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:07:07 np0005603787 lvm[150560]: VG ceph_vg0 finished
Jan 31 05:07:07 np0005603787 lvm[150582]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:07:07 np0005603787 lvm[150582]: VG ceph_vg2 finished
Jan 31 05:07:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:07 np0005603787 condescending_mclean[150321]: {}
Jan 31 05:07:07 np0005603787 systemd[1]: libpod-c0d77dd3b53b6f98dffa035b6a8ea1fc59f3a8c9a59936decc13a7df3a812334.scope: Deactivated successfully.
Jan 31 05:07:07 np0005603787 podman[150261]: 2026-01-31 10:07:07.283636131 +0000 UTC m=+0.789638736 container died c0d77dd3b53b6f98dffa035b6a8ea1fc59f3a8c9a59936decc13a7df3a812334 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:07:07 np0005603787 systemd[1]: var-lib-containers-storage-overlay-d16fd08de890d08f6bb76dcfe476e0c56232eb2a4910cbc2686b9bdb02ebf0ee-merged.mount: Deactivated successfully.
Jan 31 05:07:07 np0005603787 podman[150261]: 2026-01-31 10:07:07.326679809 +0000 UTC m=+0.832682394 container remove c0d77dd3b53b6f98dffa035b6a8ea1fc59f3a8c9a59936decc13a7df3a812334 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mclean, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 05:07:07 np0005603787 systemd[1]: libpod-conmon-c0d77dd3b53b6f98dffa035b6a8ea1fc59f3a8c9a59936decc13a7df3a812334.scope: Deactivated successfully.
Jan 31 05:07:07 np0005603787 python3.9[150583]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:07:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:07:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:07:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:07:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:07:07 np0005603787 python3.9[150775]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:07:08 np0005603787 python3.9[150853]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:07:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:07:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:07:08 np0005603787 python3.9[151005]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:07:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:09 np0005603787 python3.9[151083]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:07:10 np0005603787 python3.9[151235]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:07:10 np0005603787 systemd[1]: Reloading.
Jan 31 05:07:10 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:07:10 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:07:10 np0005603787 python3.9[151424]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:07:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:11 np0005603787 python3.9[151502]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:07:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:07:12 np0005603787 python3.9[151654]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:07:12 np0005603787 python3.9[151732]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:07:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:13 np0005603787 python3.9[151884]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:07:13 np0005603787 systemd[1]: Reloading.
Jan 31 05:07:13 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:07:13 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:07:13 np0005603787 systemd[1]: Starting Create netns directory...
Jan 31 05:07:13 np0005603787 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 05:07:13 np0005603787 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 05:07:13 np0005603787 systemd[1]: Finished Create netns directory.
Jan 31 05:07:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:07:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:07:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:07:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:07:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:07:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:07:14 np0005603787 python3.9[152078]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:07:14 np0005603787 python3.9[152230]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:07:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:15 np0005603787 python3.9[152353]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769854034.419797-328-277556846660554/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:07:16 np0005603787 python3.9[152505]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:07:16 np0005603787 python3.9[152657]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:07:16 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:07:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:17 np0005603787 python3.9[152809]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:07:17 np0005603787 python3.9[152932]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769854036.8510807-361-90064693848583/.source.json _original_basename=.xvn1qai2 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:07:18 np0005603787 python3.9[153082]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:07:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:20 np0005603787 python3.9[153505]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 31 05:07:21 np0005603787 python3.9[153657]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 05:07:21 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:07:21 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2077 writes, 9242 keys, 2077 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2077 writes, 2077 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2077 writes, 9242 keys, 2077 commit groups, 1.0 writes per commit group, ingest: 12.28 MB, 0.02 MB/s#012Interval WAL: 2077 writes, 2077 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    147.5      0.06              0.02         3    0.020       0      0       0.0       0.0#012  L6      1/0    6.83 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    178.2    156.3      0.09              0.03         2    0.046    7255    731       0.0       0.0#012 Sum      1/0    6.83 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    107.8    152.8      0.15              0.04         5    0.030    7255    731       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    110.4    156.2      0.15              0.04         4    0.037    7255    731       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    178.2    156.3      0.09              0.03         2    0.046    7255    731       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    155.9      0.06              0.02         2    0.028       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.009, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b1fd4298d0#2 capacity: 308.00 MB usage: 711.05 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(38,619.27 KB,0.196348%) FilterBlock(6,28.61 KB,0.00907105%) IndexBlock(6,63.17 KB,0.0200296%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 05:07:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:07:22 np0005603787 python3[153809]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 05:07:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:26 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:07:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:29 np0005603787 podman[153822]: 2026-01-31 10:07:29.991587104 +0000 UTC m=+7.786538206 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 05:07:30 np0005603787 podman[153943]: 2026-01-31 10:07:30.096682935 +0000 UTC m=+0.023827260 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 05:07:30 np0005603787 podman[153943]: 2026-01-31 10:07:30.83400835 +0000 UTC m=+0.761152625 container create e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 05:07:30 np0005603787 python3[153809]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 05:07:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:31 np0005603787 python3.9[154135]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:07:31 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:07:32 np0005603787 python3.9[154289]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:07:32 np0005603787 python3.9[154365]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:07:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:33 np0005603787 python3.9[154516]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769854052.7604544-439-75082820518642/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:07:33 np0005603787 python3.9[154592]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 05:07:33 np0005603787 systemd[1]: Reloading.
Jan 31 05:07:33 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:07:33 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:07:34 np0005603787 python3.9[154702]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:07:34 np0005603787 systemd[1]: Reloading.
Jan 31 05:07:34 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:07:34 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:07:34 np0005603787 systemd[1]: Starting ovn_metadata_agent container...
Jan 31 05:07:35 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:07:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f95a0724af0ecce1b5bc4c3e2a2b6a34d7d36a3c545e67e990a239e193a70d6/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 31 05:07:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f95a0724af0ecce1b5bc4c3e2a2b6a34d7d36a3c545e67e990a239e193a70d6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 05:07:35 np0005603787 systemd[1]: Started /usr/bin/podman healthcheck run e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f.
Jan 31 05:07:35 np0005603787 podman[154744]: 2026-01-31 10:07:35.08190581 +0000 UTC m=+0.131041054 container init e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: + sudo -E kolla_set_configs
Jan 31 05:07:35 np0005603787 podman[154744]: 2026-01-31 10:07:35.112128386 +0000 UTC m=+0.161263600 container start e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 05:07:35 np0005603787 edpm-start-podman-container[154744]: ovn_metadata_agent
Jan 31 05:07:35 np0005603787 edpm-start-podman-container[154743]: Creating additional drop-in dependency for "ovn_metadata_agent" (e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f)
Jan 31 05:07:35 np0005603787 podman[154766]: 2026-01-31 10:07:35.173828829 +0000 UTC m=+0.054851726 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 05:07:35 np0005603787 systemd[1]: Reloading.
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: INFO:__main__:Validating config file
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: INFO:__main__:Copying service configuration files
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: INFO:__main__:Writing out command to execute
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: ++ cat /run_command
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: + CMD=neutron-ovn-metadata-agent
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: + ARGS=
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: + sudo kolla_copy_cacerts
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: + [[ ! -n '' ]]
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: + . kolla_extend_start
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: Running command: 'neutron-ovn-metadata-agent'
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: + umask 0022
Jan 31 05:07:35 np0005603787 ovn_metadata_agent[154760]: + exec neutron-ovn-metadata-agent
Jan 31 05:07:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:35 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:07:35 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:07:35 np0005603787 systemd[1]: Started ovn_metadata_agent container.
Jan 31 05:07:36 np0005603787 python3.9[154994]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 05:07:36 np0005603787 podman[155114]: 2026-01-31 10:07:36.850712074 +0000 UTC m=+0.070785830 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 05:07:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:07:36 np0005603787 python3.9[155166]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.005 154765 INFO neutron.common.config [-] Logging enabled!#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.006 154765 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.006 154765 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.007 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.007 154765 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.007 154765 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.007 154765 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.007 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.007 154765 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.008 154765 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.008 154765 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.008 154765 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.008 154765 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.008 154765 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.008 154765 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.008 154765 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.008 154765 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.009 154765 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.009 154765 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.009 154765 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.009 154765 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.009 154765 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.009 154765 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.009 154765 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.009 154765 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.009 154765 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.010 154765 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.010 154765 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.010 154765 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.010 154765 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.010 154765 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.010 154765 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.010 154765 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.010 154765 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.010 154765 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.011 154765 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.011 154765 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.011 154765 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.011 154765 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.012 154765 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.012 154765 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.012 154765 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.012 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.012 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.012 154765 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.012 154765 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.013 154765 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.013 154765 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.013 154765 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.013 154765 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.013 154765 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.013 154765 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.013 154765 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.013 154765 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.013 154765 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.014 154765 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.014 154765 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.014 154765 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.014 154765 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.014 154765 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.014 154765 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.014 154765 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.014 154765 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.014 154765 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.015 154765 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.015 154765 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.015 154765 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.015 154765 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.015 154765 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.015 154765 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.015 154765 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.015 154765 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.015 154765 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.016 154765 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.016 154765 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.016 154765 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.016 154765 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.016 154765 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.016 154765 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.016 154765 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.016 154765 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.016 154765 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.017 154765 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.017 154765 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.017 154765 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.017 154765 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.017 154765 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.017 154765 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.017 154765 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.017 154765 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.017 154765 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.018 154765 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.018 154765 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.018 154765 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.018 154765 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.018 154765 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.018 154765 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.018 154765 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.018 154765 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.018 154765 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.019 154765 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.019 154765 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.019 154765 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.019 154765 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.019 154765 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.019 154765 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.019 154765 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.019 154765 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.020 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.020 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.020 154765 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.020 154765 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.020 154765 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.020 154765 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.020 154765 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.020 154765 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.020 154765 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.021 154765 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.021 154765 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.021 154765 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.021 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.021 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.021 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.021 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.022 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.022 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.022 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.022 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.022 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.023 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.023 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.023 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.023 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.023 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.023 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.024 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.024 154765 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.024 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.024 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.024 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.024 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.024 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.024 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.025 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.025 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.025 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.025 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.025 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.025 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.025 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.025 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.026 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.026 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.026 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.026 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.026 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.026 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.026 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.026 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.026 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.027 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.027 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.027 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.027 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.027 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.027 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.028 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.028 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.028 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.028 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.028 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.028 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.028 154765 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.028 154765 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.029 154765 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.029 154765 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.029 154765 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.029 154765 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.029 154765 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.029 154765 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.029 154765 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.029 154765 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.029 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.030 154765 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.030 154765 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.030 154765 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.030 154765 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.030 154765 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.030 154765 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.030 154765 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.030 154765 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.031 154765 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.031 154765 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.031 154765 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.031 154765 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.031 154765 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.031 154765 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.031 154765 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.031 154765 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.031 154765 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.032 154765 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.032 154765 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.032 154765 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.032 154765 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.032 154765 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.032 154765 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.032 154765 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.032 154765 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.033 154765 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.033 154765 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.033 154765 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.033 154765 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.033 154765 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.033 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.033 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.033 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.034 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.034 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.034 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.034 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.034 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.034 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.034 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.034 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.035 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.035 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.035 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.035 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.035 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.035 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.035 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.035 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.035 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.036 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.036 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.036 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.036 154765 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.036 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.036 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.036 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.036 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.036 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.037 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.037 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.037 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.037 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.037 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.037 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.037 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.038 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.038 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.038 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.038 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.038 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.038 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.038 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.038 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.039 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.039 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.039 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.039 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.039 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.039 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.039 154765 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.040 154765 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.040 154765 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.040 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.040 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.040 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.040 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.040 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.041 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.041 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.041 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.041 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.041 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.041 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.041 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.042 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.042 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.042 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.042 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.042 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.042 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.042 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.042 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.043 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.043 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.043 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.043 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.043 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.043 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.043 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.043 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.044 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.044 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.044 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.044 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.044 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.044 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.044 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.045 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.045 154765 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.045 154765 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
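The block of DEBUG lines ending at the row of asterisks above is the oslo.config option dump the agent emits at startup: ConfigOpts.log_opt_values() walks every registered option and logs its resolved value, masking secret options such as transport_url as ****. A minimal, self-contained sketch of the same mechanism, using a hypothetical option and logger rather than the agent's real ones:

    # Minimal sketch of oslo.config's startup dump; the option registered
    # below is hypothetical and only illustrates how log_opt_values()
    # produces the per-option DEBUG listing seen in this log.
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('wsgi_default_pool_size', default=100)])

    CONF([], project='example')              # parse (empty) command line / config files
    CONF.log_opt_values(LOG, logging.DEBUG)  # one DEBUG line per resolved option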
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.053 154765 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.053 154765 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.054 154765 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.054 154765 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.054 154765 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.066 154765 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name ef41023c-ae05-4c9a-b1cb-d6bd86d05fb4 (UUID: ef41023c-ae05-4c9a-b1cb-d6bd86d05fb4) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.093 154765 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.093 154765 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.093 154765 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.093 154765 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.096 154765 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.101 154765 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.107 154765 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'ef41023c-ae05-4c9a-b1cb-d6bd86d05fb4'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fe063d16b80>], external_ids={}, name=ef41023c-ae05-4c9a-b1cb-d6bd86d05fb4, nb_cfg_timestamp=1769854004513, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
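The "Matched CREATE" line above is ovsdbapp's event machinery firing: the agent registers a row event conditioned on its own chassis name, and the southbound IDL reports the matching Chassis_Private row once connected. A rough sketch of such an event class, with hypothetical names rather than the agent's actual ChassisPrivateCreateEvent:

    # Illustrative ovsdbapp row event, similar in shape to the event matched
    # in the log above; class and function names here are hypothetical.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class ChassisRegisteredEvent(row_event.RowEvent):
        def __init__(self, chassis_name):
            events = (self.ROW_CREATE,)
            table = 'Chassis_Private'
            conditions = (('name', '=', chassis_name),)
            super().__init__(events, table, conditions)

        def run(self, event, row, old):
            # Invoked from the IDL notification loop when the row appears.
            print('chassis registered:', row.name)

Such an event is then handed to the IDL's notify handler (e.g. via watch_event) so that run() fires whenever the southbound database reports a matching change.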
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.108 154765 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fe063c98c10>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.108 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.109 154765 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.109 154765 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.109 154765 INFO oslo_service.service [-] Starting 1 workers#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.113 154765 DEBUG oslo_service.service [-] Started child 155222 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.115 155222 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-235497'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.116 154765 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpwtaypsp2/privsep.sock']#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.138 155222 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.138 155222 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.138 155222 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.141 155222 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.147 155222 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.156 155222 INFO eventlet.wsgi.server [-] (155222) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Jan 31 05:07:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:37 np0005603787 python3.9[155301]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769854056.6030438-484-270617169585284/.source.yaml _original_basename=.o4ixhko5 follow=False checksum=63ed683a3c3959006a7ced0e393c2cf4f67a4cb3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:07:37 np0005603787 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.727 154765 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.727 154765 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpwtaypsp2/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.615 155327 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.618 155327 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.620 155327 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.620 155327 INFO oslo.privsep.daemon [-] privsep daemon running as pid 155327#033[00m
Jan 31 05:07:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:37.729 155327 DEBUG oslo.privsep.daemon [-] privsep: reply[59d890e2-1657-4f53-a831-a89bc5a91fac]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
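The "Running privsep helper" and "privsep daemon running as pid 155327" lines show the agent forking its privileged helper through rootwrap for the neutron.privileged.namespace_cmd context; the [21] logged earlier for privsep_namespace.capabilities is CAP_SYS_ADMIN, matching the capabilities the daemon reports. A rough sketch of how such a context is declared with oslo.privsep, using hypothetical names rather than Neutron's actual neutron.privileged definitions:

    # Illustrative oslo.privsep context; names are hypothetical and the
    # capability set mirrors the [privsep_namespace] options logged above.
    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    namespace_cmd = priv_context.PrivContext(
        __name__,
        cfg_section='privsep_namespace',      # reads the privsep_namespace.* options
        capabilities=[caps.CAP_SYS_ADMIN],    # numeric value 21, as in the log
    )

    @namespace_cmd.entrypoint
    def create_namespace(name):
        # The body runs inside the forked privsep daemon with the elevated
        # capabilities; the parent talks to it over the unix socket shown above.
        ...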
Jan 31 05:07:37 np0005603787 systemd[1]: session-48.scope: Deactivated successfully.
Jan 31 05:07:37 np0005603787 systemd[1]: session-48.scope: Consumed 47.008s CPU time.
Jan 31 05:07:37 np0005603787 systemd-logind[786]: Session 48 logged out. Waiting for processes to exit.
Jan 31 05:07:37 np0005603787 systemd-logind[786]: Removed session 48.
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.184 155327 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.185 155327 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.185 155327 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.649 155327 DEBUG oslo.privsep.daemon [-] privsep: reply[f25f9e02-1a77-4e3b-893e-da4ca46a2be4]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.651 154765 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=ef41023c-ae05-4c9a-b1cb-d6bd86d05fb4, column=external_ids, values=({'neutron:ovn-metadata-id': '68f0116a-cf5c-5e73-8b10-a1fe950f367e'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.659 154765 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ef41023c-ae05-4c9a-b1cb-d6bd86d05fb4, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.665 154765 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.665 154765 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.665 154765 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.665 154765 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.665 154765 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.665 154765 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.666 154765 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.666 154765 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.666 154765 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.666 154765 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.666 154765 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.666 154765 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.666 154765 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.666 154765 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.667 154765 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.667 154765 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.667 154765 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.667 154765 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.667 154765 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.667 154765 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.667 154765 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.668 154765 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.668 154765 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.668 154765 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.668 154765 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.668 154765 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.668 154765 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.668 154765 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.668 154765 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.669 154765 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.669 154765 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.669 154765 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.669 154765 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.669 154765 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.669 154765 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.669 154765 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.669 154765 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.670 154765 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.670 154765 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.670 154765 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.670 154765 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.670 154765 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.670 154765 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.671 154765 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.671 154765 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.671 154765 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.671 154765 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.671 154765 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.671 154765 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.671 154765 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.671 154765 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.671 154765 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.672 154765 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.672 154765 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.672 154765 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.672 154765 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.672 154765 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.672 154765 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.672 154765 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.672 154765 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.672 154765 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.672 154765 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.673 154765 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.673 154765 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.673 154765 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.673 154765 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.673 154765 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.673 154765 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.673 154765 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.673 154765 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.674 154765 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.674 154765 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.674 154765 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.674 154765 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.674 154765 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.674 154765 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.674 154765 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.674 154765 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.674 154765 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.675 154765 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.675 154765 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.675 154765 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.675 154765 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.675 154765 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.675 154765 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.675 154765 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.676 154765 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.676 154765 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.676 154765 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.676 154765 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.676 154765 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.676 154765 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.676 154765 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.677 154765 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.677 154765 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.677 154765 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.677 154765 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.677 154765 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.677 154765 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.677 154765 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.677 154765 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.678 154765 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.678 154765 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.678 154765 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.678 154765 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.678 154765 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.678 154765 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.678 154765 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.678 154765 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.679 154765 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.679 154765 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.679 154765 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.679 154765 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.679 154765 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.679 154765 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.679 154765 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.679 154765 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.680 154765 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.680 154765 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.680 154765 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.680 154765 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.680 154765 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.680 154765 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.681 154765 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.681 154765 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.681 154765 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.681 154765 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.681 154765 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.681 154765 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.681 154765 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.681 154765 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.682 154765 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.682 154765 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.682 154765 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.682 154765 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.682 154765 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.682 154765 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.683 154765 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.683 154765 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.683 154765 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.683 154765 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.683 154765 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.683 154765 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.683 154765 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.683 154765 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.684 154765 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.684 154765 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.684 154765 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.684 154765 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.684 154765 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.684 154765 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.684 154765 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.685 154765 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.685 154765 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.685 154765 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.685 154765 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.685 154765 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.685 154765 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.685 154765 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.685 154765 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.686 154765 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.686 154765 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.686 154765 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.686 154765 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.686 154765 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.686 154765 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.686 154765 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.687 154765 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.687 154765 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.687 154765 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.687 154765 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.687 154765 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.687 154765 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.687 154765 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.687 154765 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.688 154765 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.688 154765 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.688 154765 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.688 154765 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.688 154765 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.688 154765 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.688 154765 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.689 154765 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.689 154765 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.689 154765 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.689 154765 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.689 154765 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.689 154765 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.689 154765 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.689 154765 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.689 154765 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.690 154765 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.690 154765 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.690 154765 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.690 154765 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.690 154765 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.690 154765 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.690 154765 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.690 154765 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.690 154765 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.691 154765 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.691 154765 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.691 154765 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.691 154765 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.691 154765 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.691 154765 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.691 154765 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.691 154765 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.691 154765 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.691 154765 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.692 154765 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.692 154765 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.692 154765 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.692 154765 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.692 154765 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.692 154765 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.692 154765 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.692 154765 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.692 154765 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.692 154765 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.693 154765 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.693 154765 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.693 154765 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.693 154765 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.693 154765 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.693 154765 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.693 154765 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.693 154765 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.693 154765 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.693 154765 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.694 154765 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.694 154765 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.694 154765 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.694 154765 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.694 154765 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.694 154765 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.694 154765 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.694 154765 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.694 154765 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.695 154765 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.695 154765 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.695 154765 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.695 154765 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.695 154765 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.695 154765 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.695 154765 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.695 154765 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.696 154765 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.696 154765 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.696 154765 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.696 154765 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.696 154765 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.696 154765 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.696 154765 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.696 154765 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.697 154765 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.697 154765 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.697 154765 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.697 154765 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.697 154765 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.697 154765 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.697 154765 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.697 154765 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.697 154765 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.698 154765 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.698 154765 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.698 154765 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.698 154765 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.698 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.698 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.698 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.698 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.698 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.698 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.699 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.699 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.699 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.699 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.699 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.699 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.699 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.699 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.699 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.700 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.700 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.700 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.700 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.700 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.700 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.700 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.700 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.700 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.700 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.701 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.701 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.701 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.701 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.701 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.701 154765 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.701 154765 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.702 154765 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.702 154765 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.702 154765 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:07:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:07:38.702 154765 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
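The block ending here is oslo.config's startup option dump: oslo_service walks every registered option group and logs its effective value through cfg.CONF.log_opt_values() (the cfg.py:2609 call site shown on each line), finishing with the row of asterisks from cfg.py:2613. The trailing "#033[00m" on each line is the ANSI color-reset sequence emitted by the agent's log formatter, octal-escaped by journald; it is not part of the option values. A minimal sketch of how a Python service produces this kind of dump, using a made-up option group rather than the agent's real neutron/ovn registrations:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    # Hypothetical options for illustration only; the real agent registers the
    # neutron, ovn, OVS and oslo.messaging groups whose values are dumped above.
    CONF.register_opts(
        [cfg.StrOpt("ovn_sb_connection", default="tcp:127.0.0.1:6642"),
         cfg.IntOpt("ovsdb_probe_interval", default=60000)],
        group="ovn")

    CONF([], project="example")
    # One DEBUG line per option in "group.option = value" form, then a row
    # of asterisks, exactly the shape of the dump above.
    CONF.log_opt_values(LOG, logging.DEBUG)

Running the sketch prints the same layout seen above, which is why every line in the dump points at the same two cfg.py call sites.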
Jan 31 05:07:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:07:40.972255) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854060972285, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 776, "num_deletes": 251, "total_data_size": 1023380, "memory_usage": 1037288, "flush_reason": "Manual Compaction"}
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854060980059, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1014202, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9006, "largest_seqno": 9781, "table_properties": {"data_size": 1010268, "index_size": 1714, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8437, "raw_average_key_size": 18, "raw_value_size": 1002370, "raw_average_value_size": 2207, "num_data_blocks": 80, "num_entries": 454, "num_filter_entries": 454, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853992, "oldest_key_time": 1769853992, "file_creation_time": 1769854060, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 7876 microseconds, and 2160 cpu microseconds.
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:07:40.980130) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1014202 bytes OK
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:07:40.980148) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:07:40.996964) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:07:40.996996) EVENT_LOG_v1 {"time_micros": 1769854060996989, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:07:40.997017) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1019473, prev total WAL file size 1019473, number of live WAL files 2.
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:07:40.997474) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(990KB)], [23(6998KB)]
Jan 31 05:07:40 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854060997504, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8180963, "oldest_snapshot_seqno": -1}
Jan 31 05:07:41 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3351 keys, 6302572 bytes, temperature: kUnknown
Jan 31 05:07:41 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854061050152, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6302572, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6278207, "index_size": 14912, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 81197, "raw_average_key_size": 24, "raw_value_size": 6215653, "raw_average_value_size": 1854, "num_data_blocks": 649, "num_entries": 3351, "num_filter_entries": 3351, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853439, "oldest_key_time": 0, "file_creation_time": 1769854060, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:07:41 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:07:41 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:07:41.050396) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6302572 bytes
Jan 31 05:07:41 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:07:41.054717) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.1 rd, 119.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 6.8 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(14.3) write-amplify(6.2) OK, records in: 3865, records dropped: 514 output_compression: NoCompression
Jan 31 05:07:41 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:07:41.054739) EVENT_LOG_v1 {"time_micros": 1769854061054730, "job": 8, "event": "compaction_finished", "compaction_time_micros": 52742, "compaction_time_cpu_micros": 10458, "output_level": 6, "num_output_files": 1, "total_output_size": 6302572, "num_input_records": 3865, "num_output_records": 3351, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 05:07:41 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:07:41 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854061054941, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 31 05:07:41 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:07:41 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854061055692, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 31 05:07:41 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:07:40.997384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:07:41 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:07:41.055820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:07:41 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:07:41.055830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:07:41 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:07:41.055832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:07:41 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:07:41.055834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:07:41 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:07:41.055837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
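The rocksdb burst above is the monitor compacting its local store at /var/lib/ceph/mon/ceph-compute-0/store.db: job 7 flushes the memtable to a level-0 SST (#25), job 8 manually compacts that L0 file together with the existing L6 file (#23) into a new L6 file (#26), and the inputs plus the old WAL (000021.log) are then deleted. The "Manual Compaction" reason means the mon itself asked rocksdb to compact; the log does not show what prompted it (mons also compact on a schedule, and a compaction can be requested from the CLI). A sketch of triggering and sizing such a compaction by hand, assuming the ceph CLI and an admin keyring are available on this host:

    import subprocess

    # Ask the compute-0 monitor (the leader seen above) to compact its
    # rocksdb store, then report the resulting on-disk size.
    subprocess.run(["ceph", "tell", "mon.compute-0", "compact"], check=True)
    subprocess.run(["du", "-sh", "/var/lib/ceph/mon/ceph-compute-0/store.db"], check=True)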
Jan 31 05:07:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:42 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:07:43
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'backups', 'cephfs.cephfs.data', 'images', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'vms']
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
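The balancer pass above ran in upmap mode with a 5% misplaced ceiling and found nothing to move ("prepared 0/10 upmap changes") across the listed pools. The same state can be inspected from the CLI; a read-only sketch:

    import subprocess

    # Standard, read-only balancer queries: current mode/plan status and a
    # score of how balanced the current PG distribution is.
    subprocess.run(["ceph", "balancer", "status"], check=True)
    subprocess.run(["ceph", "balancer", "eval"], check=True)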
Jan 31 05:07:43 np0005603787 systemd-logind[786]: New session 49 of user zuul.
Jan 31 05:07:43 np0005603787 systemd[1]: Started Session 49 of User zuul.
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:07:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:07:44 np0005603787 python3.9[155485]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:07:45 np0005603787 python3.9[155641]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:07:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:46 np0005603787 python3.9[155805]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 05:07:46 np0005603787 systemd[1]: Reloading.
Jan 31 05:07:46 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:07:46 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:07:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:07:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:47 np0005603787 python3.9[155990]: ansible-ansible.builtin.service_facts Invoked
Jan 31 05:07:47 np0005603787 network[156007]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 05:07:47 np0005603787 network[156008]: 'network-scripts' will be removed from distribution in near future.
Jan 31 05:07:47 np0005603787 network[156009]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 05:07:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:51 np0005603787 python3.9[156272]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:07:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:51 np0005603787 python3.9[156425]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:07:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:07:52 np0005603787 python3.9[156578]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:07:53 np0005603787 python3.9[156731]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:07:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:53 np0005603787 python3.9[156884]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1786947556520692e-06 of space, bias 4.0, pg target 0.0014144337067824831 quantized to 16 (current 16)
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:07:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
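The pg_autoscaler figures above are self-consistent: each pool's pg target is its fraction of raw space times its bias times a cluster-wide PG budget, and the budget implied by these numbers is about 300, i.e. the default mon_target_pg_per_osd of 100 times what appears to be three OSDs behind the 60 GiB of raw capacity (the 300 figure is inferred, not logged). The target is then quantized to a power of two, and since every target here is far below the current pg_num, nothing changes. A quick arithmetic check against three of the logged pools:

    # PG_BUDGET = mon_target_pg_per_osd (default 100) * 3 OSDs is an assumption
    # inferred from the logged values, not something the log states.
    PG_BUDGET = 300
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),   # logged target 0.0021557...
        "cephfs.cephfs.meta": (1.1786947556520692e-06, 4.0),  # logged target 0.0014144...
        "default.rgw.log":    (4.1969867161554995e-06, 1.0),  # logged target 0.0012590...
    }
    for name, (usage_ratio, bias) in pools.items():
        print(name, usage_ratio * bias * PG_BUDGET)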
Jan 31 05:07:54 np0005603787 python3.9[157037]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:07:55 np0005603787 python3.9[157190]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:07:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:56 np0005603787 python3.9[157343]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:07:56 np0005603787 python3.9[157495]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:07:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:07:57 np0005603787 python3.9[157647]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:07:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:57 np0005603787 python3.9[157799]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:07:58 np0005603787 python3.9[157951]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:07:58 np0005603787 python3.9[158103]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:07:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:07:59 np0005603787 python3.9[158255]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:08:00 np0005603787 python3.9[158407]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:08:00 np0005603787 python3.9[158559]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:08:01 np0005603787 python3.9[158711]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:08:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:01 np0005603787 python3.9[158863]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:08:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:08:03 np0005603787 python3.9[159015]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:08:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:03 np0005603787 python3.9[159167]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:08:04 np0005603787 python3.9[159319]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:08:04 np0005603787 python3.9[159471]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
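The certmonger task above passes a small shell snippet to ansible.legacy.command with _uses_shell=True; journald renders the embedded newlines as "#012", which makes it hard to read in the log. Decoded, and re-run outside Ansible, it would look like this sketch:

    import subprocess

    # Same logic as the logged _raw_params, with "#012" decoded back to
    # newlines: stop and disable certmonger only if it is active, and mask
    # it unless a local unit override exists.
    script = """
    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi
    """
    subprocess.run(script, shell=True, check=False)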
Jan 31 05:08:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:05 np0005603787 podman[159597]: 2026-01-31 10:08:05.427166921 +0000 UTC m=+0.044908759 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
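The podman line above is a periodic healthcheck event for the ovn_metadata_agent container; its configured test is the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent, and health_status=healthy with health_failing_streak=0 means the check passed. The same check can be run on demand (this applies equally to the ovn_controller event a few lines below); a sketch:

    import subprocess

    # Run the container's configured healthcheck once; podman exits 0 when
    # the check passes, matching health_status=healthy in the event above.
    r = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"])
    print("healthy" if r.returncode == 0 else "unhealthy")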
Jan 31 05:08:05 np0005603787 python3.9[159635]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 05:08:06 np0005603787 python3.9[159794]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 05:08:06 np0005603787 systemd[1]: Reloading.
Jan 31 05:08:06 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:08:06 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:08:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:08:07 np0005603787 podman[159952]: 2026-01-31 10:08:07.230270874 +0000 UTC m=+0.075400168 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 05:08:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:07 np0005603787 python3.9[160000]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:08:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:08:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:08:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:08:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:08:07 np0005603787 python3.9[160216]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:08:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:08:08 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:08:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:08:08 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:08:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:08:08 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:08:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:08:08 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:08:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:08:08 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:08:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:08:08 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
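[editor's note] The audit lines above show the cephadm mgr (mgr.14122, entity mgr.compute-0.mdmqaq) dispatching mon commands such as `config-key set`, `auth get`, `config generate-minimal-conf`, and `osd tree`. A quick, hedged one-off for tallying which prefixes appear in such audit lines (an ad-hoc analysis helper, not a Ceph tool; it only matches the quoted cmd={"prefix": ...} form):

```python
# Sketch: count mon command prefixes seen in ceph audit log lines.
import re
from collections import Counter

PREFIX_RE = re.compile(r'cmd=\{"prefix": "(?P<prefix>[^"]+)"')

def count_prefixes(lines):
    counts = Counter()
    for line in lines:
        m = PREFIX_RE.search(line)
        if m:
            counts[m["prefix"]] += 1
    return counts

# On the dispatch lines above this would count entries such as
# "config generate-minimal-conf", "auth get", and "osd tree".
```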
Jan 31 05:08:08 np0005603787 python3.9[160461]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:08:08 np0005603787 podman[160618]: 2026-01-31 10:08:08.800960573 +0000 UTC m=+0.040257907 container create 91a878a6f8974c68c77b62c9ac3fa85436cf1dfd526da6dd6dfa5a46444428c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_rosalind, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:08:08 np0005603787 systemd[1]: Started libpod-conmon-91a878a6f8974c68c77b62c9ac3fa85436cf1dfd526da6dd6dfa5a46444428c8.scope.
Jan 31 05:08:08 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:08:08 np0005603787 podman[160618]: 2026-01-31 10:08:08.875996609 +0000 UTC m=+0.115293993 container init 91a878a6f8974c68c77b62c9ac3fa85436cf1dfd526da6dd6dfa5a46444428c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_rosalind, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:08:08 np0005603787 podman[160618]: 2026-01-31 10:08:08.785659228 +0000 UTC m=+0.024956572 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:08:08 np0005603787 podman[160618]: 2026-01-31 10:08:08.884899995 +0000 UTC m=+0.124197329 container start 91a878a6f8974c68c77b62c9ac3fa85436cf1dfd526da6dd6dfa5a46444428c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_rosalind, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:08:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:08:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:08:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:08:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:08:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:08:08 np0005603787 podman[160618]: 2026-01-31 10:08:08.8937941 +0000 UTC m=+0.133091474 container attach 91a878a6f8974c68c77b62c9ac3fa85436cf1dfd526da6dd6dfa5a46444428c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_rosalind, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:08:08 np0005603787 amazing_rosalind[160668]: 167 167
Jan 31 05:08:08 np0005603787 systemd[1]: libpod-91a878a6f8974c68c77b62c9ac3fa85436cf1dfd526da6dd6dfa5a46444428c8.scope: Deactivated successfully.
Jan 31 05:08:08 np0005603787 podman[160618]: 2026-01-31 10:08:08.903681622 +0000 UTC m=+0.142978976 container died 91a878a6f8974c68c77b62c9ac3fa85436cf1dfd526da6dd6dfa5a46444428c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:08:08 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f6c1d3f1d7714b1f82aa7b6fccee161b23645a56d7ab6d449ab272970773744c-merged.mount: Deactivated successfully.
Jan 31 05:08:08 np0005603787 podman[160618]: 2026-01-31 10:08:08.975247917 +0000 UTC m=+0.214545251 container remove 91a878a6f8974c68c77b62c9ac3fa85436cf1dfd526da6dd6dfa5a46444428c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:08:08 np0005603787 systemd[1]: libpod-conmon-91a878a6f8974c68c77b62c9ac3fa85436cf1dfd526da6dd6dfa5a46444428c8.scope: Deactivated successfully.
Jan 31 05:08:09 np0005603787 python3.9[160711]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:08:09 np0005603787 podman[160723]: 2026-01-31 10:08:09.132253603 +0000 UTC m=+0.038025178 container create 45fea67b7d5b34c0354eafffa4b66f3318409d5c783b2dc76a1ec0207217346b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:08:09 np0005603787 systemd[1]: Started libpod-conmon-45fea67b7d5b34c0354eafffa4b66f3318409d5c783b2dc76a1ec0207217346b.scope.
Jan 31 05:08:09 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:08:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2e1dc30adeda835a70746bbcf32d2945ba90c2b7f5f52ffcd114d6c307dc97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:08:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2e1dc30adeda835a70746bbcf32d2945ba90c2b7f5f52ffcd114d6c307dc97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:08:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2e1dc30adeda835a70746bbcf32d2945ba90c2b7f5f52ffcd114d6c307dc97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:08:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2e1dc30adeda835a70746bbcf32d2945ba90c2b7f5f52ffcd114d6c307dc97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:08:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2e1dc30adeda835a70746bbcf32d2945ba90c2b7f5f52ffcd114d6c307dc97/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:08:09 np0005603787 podman[160723]: 2026-01-31 10:08:09.205243375 +0000 UTC m=+0.111015000 container init 45fea67b7d5b34c0354eafffa4b66f3318409d5c783b2dc76a1ec0207217346b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:08:09 np0005603787 podman[160723]: 2026-01-31 10:08:09.11401536 +0000 UTC m=+0.019786955 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:08:09 np0005603787 podman[160723]: 2026-01-31 10:08:09.211320487 +0000 UTC m=+0.117092062 container start 45fea67b7d5b34c0354eafffa4b66f3318409d5c783b2dc76a1ec0207217346b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_shamir, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 05:08:09 np0005603787 podman[160723]: 2026-01-31 10:08:09.220566121 +0000 UTC m=+0.126337696 container attach 45fea67b7d5b34c0354eafffa4b66f3318409d5c783b2dc76a1ec0207217346b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:08:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:09 np0005603787 python3.9[160898]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:08:09 np0005603787 keen_shamir[160761]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:08:09 np0005603787 keen_shamir[160761]: --> All data devices are unavailable
Jan 31 05:08:09 np0005603787 systemd[1]: libpod-45fea67b7d5b34c0354eafffa4b66f3318409d5c783b2dc76a1ec0207217346b.scope: Deactivated successfully.
Jan 31 05:08:09 np0005603787 podman[160723]: 2026-01-31 10:08:09.700950718 +0000 UTC m=+0.606722303 container died 45fea67b7d5b34c0354eafffa4b66f3318409d5c783b2dc76a1ec0207217346b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_shamir, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:08:09 np0005603787 systemd[1]: var-lib-containers-storage-overlay-6d2e1dc30adeda835a70746bbcf32d2945ba90c2b7f5f52ffcd114d6c307dc97-merged.mount: Deactivated successfully.
Jan 31 05:08:09 np0005603787 podman[160723]: 2026-01-31 10:08:09.750860549 +0000 UTC m=+0.656632114 container remove 45fea67b7d5b34c0354eafffa4b66f3318409d5c783b2dc76a1ec0207217346b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:08:09 np0005603787 systemd[1]: libpod-conmon-45fea67b7d5b34c0354eafffa4b66f3318409d5c783b2dc76a1ec0207217346b.scope: Deactivated successfully.
Jan 31 05:08:10 np0005603787 podman[161138]: 2026-01-31 10:08:10.160645617 +0000 UTC m=+0.063200494 container create ea25bed7ecc3b7d015161bff0b30cc10cff7edcdf506ae41fedc2a5e1ab0d9c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_haibt, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 05:08:10 np0005603787 python3.9[161125]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:08:10 np0005603787 systemd[1]: Started libpod-conmon-ea25bed7ecc3b7d015161bff0b30cc10cff7edcdf506ae41fedc2a5e1ab0d9c7.scope.
Jan 31 05:08:10 np0005603787 podman[161138]: 2026-01-31 10:08:10.128860695 +0000 UTC m=+0.031415632 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:08:10 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:08:10 np0005603787 podman[161138]: 2026-01-31 10:08:10.24006853 +0000 UTC m=+0.142623387 container init ea25bed7ecc3b7d015161bff0b30cc10cff7edcdf506ae41fedc2a5e1ab0d9c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_haibt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 05:08:10 np0005603787 podman[161138]: 2026-01-31 10:08:10.24462832 +0000 UTC m=+0.147183187 container start ea25bed7ecc3b7d015161bff0b30cc10cff7edcdf506ae41fedc2a5e1ab0d9c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_haibt, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 31 05:08:10 np0005603787 hopeful_haibt[161155]: 167 167
Jan 31 05:08:10 np0005603787 systemd[1]: libpod-ea25bed7ecc3b7d015161bff0b30cc10cff7edcdf506ae41fedc2a5e1ab0d9c7.scope: Deactivated successfully.
Jan 31 05:08:10 np0005603787 podman[161138]: 2026-01-31 10:08:10.249915721 +0000 UTC m=+0.152470558 container attach ea25bed7ecc3b7d015161bff0b30cc10cff7edcdf506ae41fedc2a5e1ab0d9c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:08:10 np0005603787 podman[161138]: 2026-01-31 10:08:10.251453401 +0000 UTC m=+0.154008268 container died ea25bed7ecc3b7d015161bff0b30cc10cff7edcdf506ae41fedc2a5e1ab0d9c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_haibt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 05:08:10 np0005603787 systemd[1]: var-lib-containers-storage-overlay-46cfb738f5779050f43954d0c3bedfd51ff25d20c114a4d531ac74bccc9a4bea-merged.mount: Deactivated successfully.
Jan 31 05:08:10 np0005603787 podman[161138]: 2026-01-31 10:08:10.291365337 +0000 UTC m=+0.193920174 container remove ea25bed7ecc3b7d015161bff0b30cc10cff7edcdf506ae41fedc2a5e1ab0d9c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_haibt, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 05:08:10 np0005603787 systemd[1]: libpod-conmon-ea25bed7ecc3b7d015161bff0b30cc10cff7edcdf506ae41fedc2a5e1ab0d9c7.scope: Deactivated successfully.
Jan 31 05:08:10 np0005603787 podman[161254]: 2026-01-31 10:08:10.454440535 +0000 UTC m=+0.042263250 container create f7d1d9670f42b40bc06a573c3c6d7b08ea9569044cc7112de48bee3abab934aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 05:08:10 np0005603787 systemd[1]: Started libpod-conmon-f7d1d9670f42b40bc06a573c3c6d7b08ea9569044cc7112de48bee3abab934aa.scope.
Jan 31 05:08:10 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:08:10 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37561df895d5a336a3fad8587f9dfd3fbfc17c1042529c698bbd96d2cc1217b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:08:10 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37561df895d5a336a3fad8587f9dfd3fbfc17c1042529c698bbd96d2cc1217b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:08:10 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37561df895d5a336a3fad8587f9dfd3fbfc17c1042529c698bbd96d2cc1217b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:08:10 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37561df895d5a336a3fad8587f9dfd3fbfc17c1042529c698bbd96d2cc1217b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:08:10 np0005603787 podman[161254]: 2026-01-31 10:08:10.433601153 +0000 UTC m=+0.021423888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:08:10 np0005603787 podman[161254]: 2026-01-31 10:08:10.538946071 +0000 UTC m=+0.126768826 container init f7d1d9670f42b40bc06a573c3c6d7b08ea9569044cc7112de48bee3abab934aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:08:10 np0005603787 podman[161254]: 2026-01-31 10:08:10.544904779 +0000 UTC m=+0.132727484 container start f7d1d9670f42b40bc06a573c3c6d7b08ea9569044cc7112de48bee3abab934aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:08:10 np0005603787 podman[161254]: 2026-01-31 10:08:10.54871882 +0000 UTC m=+0.136541575 container attach f7d1d9670f42b40bc06a573c3c6d7b08ea9569044cc7112de48bee3abab934aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:08:10 np0005603787 python3.9[161350]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:08:10 np0005603787 jovial_golick[161302]: {
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:    "0": [
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:        {
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "devices": [
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "/dev/loop3"
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            ],
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "lv_name": "ceph_lv0",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "lv_size": "21470642176",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "name": "ceph_lv0",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "tags": {
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.cluster_name": "ceph",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.crush_device_class": "",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.encrypted": "0",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.objectstore": "bluestore",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.osd_id": "0",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.type": "block",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.vdo": "0",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.with_tpm": "0"
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            },
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "type": "block",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "vg_name": "ceph_vg0"
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:        }
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:    ],
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:    "1": [
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:        {
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "devices": [
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "/dev/loop4"
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            ],
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "lv_name": "ceph_lv1",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "lv_size": "21470642176",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "name": "ceph_lv1",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "tags": {
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.cluster_name": "ceph",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.crush_device_class": "",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.encrypted": "0",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.objectstore": "bluestore",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.osd_id": "1",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.type": "block",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.vdo": "0",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.with_tpm": "0"
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            },
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "type": "block",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "vg_name": "ceph_vg1"
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:        }
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:    ],
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:    "2": [
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:        {
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "devices": [
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "/dev/loop5"
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            ],
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "lv_name": "ceph_lv2",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "lv_size": "21470642176",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "name": "ceph_lv2",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "tags": {
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.cluster_name": "ceph",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.crush_device_class": "",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.encrypted": "0",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.objectstore": "bluestore",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.osd_id": "2",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.type": "block",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.vdo": "0",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:                "ceph.with_tpm": "0"
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            },
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "type": "block",
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:            "vg_name": "ceph_vg2"
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:        }
Jan 31 05:08:10 np0005603787 jovial_golick[161302]:    ]
Jan 31 05:08:10 np0005603787 jovial_golick[161302]: }
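[editor's note] The JSON emitted by the ephemeral jovial_golick container is shaped like a `ceph-volume lvm list --format json` report: OSD id keys ("0", "1", "2") mapping to lists of LV records, each with its backing device (/dev/loop3-5), lv_path, and ceph.* tags. Assuming that shape, a short sketch for summarizing it:

```python
# Sketch: reduce a ceph-volume "lvm list"-style JSON report to
# {osd_id: (lv_path, backing_devices, osd_fsid)}.
import json

def summarize_osd_lvs(report_json: str):
    report = json.loads(report_json)
    summary = {}
    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv.get("tags", {})
            summary[osd_id] = (
                lv.get("lv_path"),
                lv.get("devices", []),
                tags.get("ceph.osd_fsid"),
            )
    return summary

# With the report logged above this yields, e.g.
# {"0": ("/dev/ceph_vg0/ceph_lv0", ["/dev/loop3"], "4a39e342-98b4-4260-a68a-c160a0fcb60c"), ...}
```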
Jan 31 05:08:10 np0005603787 systemd[1]: libpod-f7d1d9670f42b40bc06a573c3c6d7b08ea9569044cc7112de48bee3abab934aa.scope: Deactivated successfully.
Jan 31 05:08:10 np0005603787 podman[161254]: 2026-01-31 10:08:10.837223577 +0000 UTC m=+0.425046332 container died f7d1d9670f42b40bc06a573c3c6d7b08ea9569044cc7112de48bee3abab934aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 05:08:10 np0005603787 systemd[1]: var-lib-containers-storage-overlay-a37561df895d5a336a3fad8587f9dfd3fbfc17c1042529c698bbd96d2cc1217b-merged.mount: Deactivated successfully.
Jan 31 05:08:10 np0005603787 podman[161254]: 2026-01-31 10:08:10.879969969 +0000 UTC m=+0.467792674 container remove f7d1d9670f42b40bc06a573c3c6d7b08ea9569044cc7112de48bee3abab934aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:08:10 np0005603787 systemd[1]: libpod-conmon-f7d1d9670f42b40bc06a573c3c6d7b08ea9569044cc7112de48bee3abab934aa.scope: Deactivated successfully.
Jan 31 05:08:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:11 np0005603787 podman[161508]: 2026-01-31 10:08:11.312049807 +0000 UTC m=+0.039624820 container create 033d2107b5e8ec3d4b0c45813dbe3ec54782ad5bb3f250baa6fe2241dd2327db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_morse, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:08:11 np0005603787 systemd[1]: Started libpod-conmon-033d2107b5e8ec3d4b0c45813dbe3ec54782ad5bb3f250baa6fe2241dd2327db.scope.
Jan 31 05:08:11 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:08:11 np0005603787 podman[161508]: 2026-01-31 10:08:11.383949621 +0000 UTC m=+0.111524714 container init 033d2107b5e8ec3d4b0c45813dbe3ec54782ad5bb3f250baa6fe2241dd2327db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:08:11 np0005603787 podman[161508]: 2026-01-31 10:08:11.292787027 +0000 UTC m=+0.020362130 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:08:11 np0005603787 podman[161508]: 2026-01-31 10:08:11.390860373 +0000 UTC m=+0.118435386 container start 033d2107b5e8ec3d4b0c45813dbe3ec54782ad5bb3f250baa6fe2241dd2327db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_morse, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:08:11 np0005603787 podman[161508]: 2026-01-31 10:08:11.393920775 +0000 UTC m=+0.121495868 container attach 033d2107b5e8ec3d4b0c45813dbe3ec54782ad5bb3f250baa6fe2241dd2327db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 05:08:11 np0005603787 busy_morse[161548]: 167 167
Jan 31 05:08:11 np0005603787 systemd[1]: libpod-033d2107b5e8ec3d4b0c45813dbe3ec54782ad5bb3f250baa6fe2241dd2327db.scope: Deactivated successfully.
Jan 31 05:08:11 np0005603787 podman[161508]: 2026-01-31 10:08:11.396680258 +0000 UTC m=+0.124255271 container died 033d2107b5e8ec3d4b0c45813dbe3ec54782ad5bb3f250baa6fe2241dd2327db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:08:11 np0005603787 systemd[1]: var-lib-containers-storage-overlay-18d868815515486a8a6345defda86a1ae23acc3422433f3d8dd652c78281594a-merged.mount: Deactivated successfully.
Jan 31 05:08:11 np0005603787 podman[161508]: 2026-01-31 10:08:11.433330848 +0000 UTC m=+0.160905861 container remove 033d2107b5e8ec3d4b0c45813dbe3ec54782ad5bb3f250baa6fe2241dd2327db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_morse, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 05:08:11 np0005603787 systemd[1]: libpod-conmon-033d2107b5e8ec3d4b0c45813dbe3ec54782ad5bb3f250baa6fe2241dd2327db.scope: Deactivated successfully.
Jan 31 05:08:11 np0005603787 podman[161625]: 2026-01-31 10:08:11.57431485 +0000 UTC m=+0.042741533 container create df5e52bc9166096d73b324c02080ac9f141f8cbe671a94b3575875580d5828f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wu, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 05:08:11 np0005603787 systemd[1]: Started libpod-conmon-df5e52bc9166096d73b324c02080ac9f141f8cbe671a94b3575875580d5828f7.scope.
Jan 31 05:08:11 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:08:11 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54ae33816f8d3a7434456ff4b4818c0eeab55685a556aae64a8e9ccc5b7fe368/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:08:11 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54ae33816f8d3a7434456ff4b4818c0eeab55685a556aae64a8e9ccc5b7fe368/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:08:11 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54ae33816f8d3a7434456ff4b4818c0eeab55685a556aae64a8e9ccc5b7fe368/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:08:11 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54ae33816f8d3a7434456ff4b4818c0eeab55685a556aae64a8e9ccc5b7fe368/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:08:11 np0005603787 podman[161625]: 2026-01-31 10:08:11.644754065 +0000 UTC m=+0.113180748 container init df5e52bc9166096d73b324c02080ac9f141f8cbe671a94b3575875580d5828f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wu, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:08:11 np0005603787 podman[161625]: 2026-01-31 10:08:11.556822907 +0000 UTC m=+0.025249610 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:08:11 np0005603787 podman[161625]: 2026-01-31 10:08:11.653496296 +0000 UTC m=+0.121922979 container start df5e52bc9166096d73b324c02080ac9f141f8cbe671a94b3575875580d5828f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:08:11 np0005603787 podman[161625]: 2026-01-31 10:08:11.657171174 +0000 UTC m=+0.125597867 container attach df5e52bc9166096d73b324c02080ac9f141f8cbe671a94b3575875580d5828f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 05:08:11 np0005603787 python3.9[161619]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 31 05:08:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:08:12 np0005603787 lvm[161867]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:08:12 np0005603787 lvm[161867]: VG ceph_vg1 finished
Jan 31 05:08:12 np0005603787 lvm[161860]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:08:12 np0005603787 lvm[161860]: VG ceph_vg0 finished
Jan 31 05:08:12 np0005603787 lvm[161874]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:08:12 np0005603787 lvm[161874]: VG ceph_vg2 finished
Jan 31 05:08:12 np0005603787 magical_wu[161640]: {}
Jan 31 05:08:12 np0005603787 python3.9[161872]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 05:08:12 np0005603787 systemd[1]: libpod-df5e52bc9166096d73b324c02080ac9f141f8cbe671a94b3575875580d5828f7.scope: Deactivated successfully.
Jan 31 05:08:12 np0005603787 podman[161625]: 2026-01-31 10:08:12.451227444 +0000 UTC m=+0.919654137 container died df5e52bc9166096d73b324c02080ac9f141f8cbe671a94b3575875580d5828f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 05:08:12 np0005603787 systemd[1]: var-lib-containers-storage-overlay-54ae33816f8d3a7434456ff4b4818c0eeab55685a556aae64a8e9ccc5b7fe368-merged.mount: Deactivated successfully.
Jan 31 05:08:12 np0005603787 podman[161625]: 2026-01-31 10:08:12.49224071 +0000 UTC m=+0.960667393 container remove df5e52bc9166096d73b324c02080ac9f141f8cbe671a94b3575875580d5828f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wu, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:08:12 np0005603787 systemd[1]: libpod-conmon-df5e52bc9166096d73b324c02080ac9f141f8cbe671a94b3575875580d5828f7.scope: Deactivated successfully.
Jan 31 05:08:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:08:12 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:08:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:08:12 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:08:12 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:08:12 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:08:13 np0005603787 python3.9[162070]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 05:08:13 np0005603787 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 05:08:13 np0005603787 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 05:08:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:08:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:08:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:08:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:08:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:08:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:08:14 np0005603787 python3.9[162231]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:08:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:15 np0005603787 python3.9[162315]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:08:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:08:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:08:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:24 np0005603787 ceph-osd[85879]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:08:24 np0005603787 ceph-osd[85879]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5619 writes, 24K keys, 5619 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5619 writes, 891 syncs, 6.31 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5619 writes, 24K keys, 5619 commit groups, 1.0 writes per commit group, ingest: 18.78 MB, 0.03 MB/s#012Interval WAL: 5619 writes, 891 syncs, 6.31 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55627ce7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency 
Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55627ce7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) 
KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 31 05:08:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:08:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:30 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:08:30 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 602.2 total, 600.0 interval#012Cumulative writes: 6897 writes, 28K keys, 6897 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6897 writes, 1298 syncs, 5.31 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6897 writes, 28K keys, 6897 commit groups, 1.0 writes per commit group, ingest: 19.85 MB, 0.03 MB/s#012Interval WAL: 6897 writes, 1298 syncs, 5.31 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 602.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558daf9bba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency 
Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 602.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558daf9bba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) 
KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 602.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 31 05:08:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:08:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:35 np0005603787 podman[162507]: 2026-01-31 10:08:35.852855566 +0000 UTC m=+0.067206900 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 05:08:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:08:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:08:37.047 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:08:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:08:37.049 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:08:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:08:37.049 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
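[editor's note] The three ovn_metadata_agent lines above are the standard oslo.concurrency lock trace ("Acquiring lock", "acquired ... waited", "released ... held") emitted by the inner wrapper in lockutils.py around ProcessMonitor._check_child_processes. A minimal Python sketch of the pattern that produces this kind of trace; the lock name comes from the log, the rest is illustrative and not the actual neutron code:

from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def _check_child_processes():
    # Runs with the named in-process lock held; oslo.concurrency logs
    # "Acquiring lock ...", "Lock ... acquired ... :: waited Ns" and
    # "Lock ... released ... :: held Ns" at DEBUG level, as seen above.
    pass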
Jan 31 05:08:37 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:08:37 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.3 total, 600.0 interval#012Cumulative writes: 5425 writes, 23K keys, 5425 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5425 writes, 783 syncs, 6.93 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5425 writes, 23K keys, 5425 commit groups, 1.0 writes per commit group, ingest: 18.52 MB, 0.03 MB/s#012Interval WAL: 5425 writes, 783 syncs, 6.93 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c8dcfd98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency 
Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c8dcfd98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) 
KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 31 05:08:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:37 np0005603787 podman[162527]: 2026-01-31 10:08:37.893265829 +0000 UTC m=+0.115756305 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 05:08:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:41 np0005603787 ceph-mgr[75453]: [devicehealth INFO root] Check health
Jan 31 05:08:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:42 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:08:43
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'default.rgw.control', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'volumes', '.rgw.root']
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:08:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:08:45 np0005603787 kernel: SELinux:  Converting 2777 SID table entries...
Jan 31 05:08:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:45 np0005603787 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 05:08:45 np0005603787 kernel: SELinux:  policy capability open_perms=1
Jan 31 05:08:45 np0005603787 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 05:08:45 np0005603787 kernel: SELinux:  policy capability always_check_network=0
Jan 31 05:08:45 np0005603787 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 05:08:45 np0005603787 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 05:08:45 np0005603787 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 05:08:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:08:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:08:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1786947556520692e-06 of space, bias 4.0, pg target 0.0014144337067824831 quantized to 16 (current 16)
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:08:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
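[editor's note] The pg_autoscaler lines above repeat one per-pool calculation: the logged "pg target" equals the pool's capacity ratio times its bias times roughly 300, which is consistent with the default mon_target_pg_per_osd of 100 multiplied by the three OSDs in this deployment; the result is then quantized, and pools whose target is far below the current pg_num are left unchanged ("quantized to 32 (current 32)"). A rough reconstruction of that arithmetic against the values logged above; the 300 factor and the quantization behaviour are inferred from the log, not taken from the Ceph source:

# hypothetical reconstruction of the "pg target" values logged by pg_autoscaler
TARGET_PG_PER_OSD = 100   # assumed Ceph default mon_target_pg_per_osd
NUM_OSDS = 3              # three ceph-osd daemons are running on this node

def pg_target(capacity_ratio, bias):
    return capacity_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

print(pg_target(7.185749983720779e-06, 1.0))   # ~0.0021557, matches the '.mgr' line
print(pg_target(1.1786947556520692e-06, 4.0))  # ~0.0014144, matches 'cephfs.cephfs.meta'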
Jan 31 05:08:54 np0005603787 kernel: SELinux:  Converting 2777 SID table entries...
Jan 31 05:08:54 np0005603787 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 05:08:54 np0005603787 kernel: SELinux:  policy capability open_perms=1
Jan 31 05:08:54 np0005603787 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 05:08:54 np0005603787 kernel: SELinux:  policy capability always_check_network=0
Jan 31 05:08:54 np0005603787 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 05:08:54 np0005603787 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 05:08:54 np0005603787 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 05:08:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:08:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:08:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:09:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:06 np0005603787 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 31 05:09:06 np0005603787 podman[162568]: 2026-01-31 10:09:06.847450505 +0000 UTC m=+0.055493071 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:09:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:09:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:08 np0005603787 podman[164187]: 2026-01-31 10:09:08.872298276 +0000 UTC m=+0.086833172 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 05:09:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:09:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 05:09:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 05:09:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:09:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:09:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:09:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:09:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:09:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:09:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:09:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:09:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:09:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:09:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:09:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:09:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:13 np0005603787 podman[168957]: 2026-01-31 10:09:13.558678898 +0000 UTC m=+0.050940540 container create 6dd8c652cb394c099ae346daad3c83538f96f21dbe6c9d5bfed10563e4726fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_satoshi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 05:09:13 np0005603787 systemd[1]: Started libpod-conmon-6dd8c652cb394c099ae346daad3c83538f96f21dbe6c9d5bfed10563e4726fae.scope.
Jan 31 05:09:13 np0005603787 podman[168957]: 2026-01-31 10:09:13.527817368 +0000 UTC m=+0.020079030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:09:13 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:09:13 np0005603787 podman[168957]: 2026-01-31 10:09:13.799309659 +0000 UTC m=+0.291571321 container init 6dd8c652cb394c099ae346daad3c83538f96f21dbe6c9d5bfed10563e4726fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:09:13 np0005603787 podman[168957]: 2026-01-31 10:09:13.805986078 +0000 UTC m=+0.298247750 container start 6dd8c652cb394c099ae346daad3c83538f96f21dbe6c9d5bfed10563e4726fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_satoshi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 05:09:13 np0005603787 musing_satoshi[169079]: 167 167
Jan 31 05:09:13 np0005603787 systemd[1]: libpod-6dd8c652cb394c099ae346daad3c83538f96f21dbe6c9d5bfed10563e4726fae.scope: Deactivated successfully.
Jan 31 05:09:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:09:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:09:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:09:13 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 05:09:13 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:09:13 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:09:13 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:09:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:09:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:09:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:09:13 np0005603787 podman[168957]: 2026-01-31 10:09:13.863486002 +0000 UTC m=+0.355747684 container attach 6dd8c652cb394c099ae346daad3c83538f96f21dbe6c9d5bfed10563e4726fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_satoshi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 05:09:13 np0005603787 podman[168957]: 2026-01-31 10:09:13.864546571 +0000 UTC m=+0.356808263 container died 6dd8c652cb394c099ae346daad3c83538f96f21dbe6c9d5bfed10563e4726fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 05:09:13 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f7463afa6e2fd68ef8b157675b0de771e242aa3e7fda4d9e982b20fd039463b6-merged.mount: Deactivated successfully.
Jan 31 05:09:14 np0005603787 podman[168957]: 2026-01-31 10:09:14.155517734 +0000 UTC m=+0.647779366 container remove 6dd8c652cb394c099ae346daad3c83538f96f21dbe6c9d5bfed10563e4726fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Jan 31 05:09:14 np0005603787 systemd[1]: libpod-conmon-6dd8c652cb394c099ae346daad3c83538f96f21dbe6c9d5bfed10563e4726fae.scope: Deactivated successfully.
Jan 31 05:09:14 np0005603787 podman[169737]: 2026-01-31 10:09:14.274951031 +0000 UTC m=+0.041061143 container create ad3595f1bfbb2364efbc5bd20444ce28c1a83c333762c42fee42372c77e84876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_feynman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:09:14 np0005603787 systemd[1]: Started libpod-conmon-ad3595f1bfbb2364efbc5bd20444ce28c1a83c333762c42fee42372c77e84876.scope.
Jan 31 05:09:14 np0005603787 podman[169737]: 2026-01-31 10:09:14.251961774 +0000 UTC m=+0.018071906 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:09:14 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:09:14 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1fa68532a96cf126e9abe6057b705a782c493929d55cff26a7c8e8b5e50c70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:09:14 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1fa68532a96cf126e9abe6057b705a782c493929d55cff26a7c8e8b5e50c70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:09:14 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1fa68532a96cf126e9abe6057b705a782c493929d55cff26a7c8e8b5e50c70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:09:14 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1fa68532a96cf126e9abe6057b705a782c493929d55cff26a7c8e8b5e50c70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:09:14 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1fa68532a96cf126e9abe6057b705a782c493929d55cff26a7c8e8b5e50c70/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:09:14 np0005603787 podman[169737]: 2026-01-31 10:09:14.370406664 +0000 UTC m=+0.136516806 container init ad3595f1bfbb2364efbc5bd20444ce28c1a83c333762c42fee42372c77e84876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_feynman, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 05:09:14 np0005603787 podman[169737]: 2026-01-31 10:09:14.375666315 +0000 UTC m=+0.141776427 container start ad3595f1bfbb2364efbc5bd20444ce28c1a83c333762c42fee42372c77e84876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 05:09:14 np0005603787 podman[169737]: 2026-01-31 10:09:14.381588494 +0000 UTC m=+0.147698636 container attach ad3595f1bfbb2364efbc5bd20444ce28c1a83c333762c42fee42372c77e84876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_feynman, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 05:09:14 np0005603787 hardcore_feynman[169862]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:09:14 np0005603787 hardcore_feynman[169862]: --> All data devices are unavailable
Jan 31 05:09:14 np0005603787 systemd[1]: libpod-ad3595f1bfbb2364efbc5bd20444ce28c1a83c333762c42fee42372c77e84876.scope: Deactivated successfully.
Jan 31 05:09:14 np0005603787 podman[169737]: 2026-01-31 10:09:14.779190581 +0000 UTC m=+0.545300703 container died ad3595f1bfbb2364efbc5bd20444ce28c1a83c333762c42fee42372c77e84876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_feynman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 31 05:09:14 np0005603787 systemd[1]: var-lib-containers-storage-overlay-6f1fa68532a96cf126e9abe6057b705a782c493929d55cff26a7c8e8b5e50c70-merged.mount: Deactivated successfully.
Jan 31 05:09:14 np0005603787 podman[169737]: 2026-01-31 10:09:14.853184368 +0000 UTC m=+0.619294480 container remove ad3595f1bfbb2364efbc5bd20444ce28c1a83c333762c42fee42372c77e84876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:09:14 np0005603787 systemd[1]: libpod-conmon-ad3595f1bfbb2364efbc5bd20444ce28c1a83c333762c42fee42372c77e84876.scope: Deactivated successfully.
Jan 31 05:09:15 np0005603787 podman[170890]: 2026-01-31 10:09:15.24092133 +0000 UTC m=+0.038870215 container create bc7abd9d5e3ccb27ab4ec824d27d7002126aa13b5579dcf27e6254e85b92d444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_satoshi, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 05:09:15 np0005603787 systemd[1]: Started libpod-conmon-bc7abd9d5e3ccb27ab4ec824d27d7002126aa13b5579dcf27e6254e85b92d444.scope.
Jan 31 05:09:15 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:09:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:15 np0005603787 podman[170890]: 2026-01-31 10:09:15.317245119 +0000 UTC m=+0.115194034 container init bc7abd9d5e3ccb27ab4ec824d27d7002126aa13b5579dcf27e6254e85b92d444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_satoshi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:09:15 np0005603787 podman[170890]: 2026-01-31 10:09:15.222845254 +0000 UTC m=+0.020794179 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:09:15 np0005603787 podman[170890]: 2026-01-31 10:09:15.323442186 +0000 UTC m=+0.121391071 container start bc7abd9d5e3ccb27ab4ec824d27d7002126aa13b5579dcf27e6254e85b92d444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_satoshi, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:09:15 np0005603787 clever_satoshi[170986]: 167 167
Jan 31 05:09:15 np0005603787 systemd[1]: libpod-bc7abd9d5e3ccb27ab4ec824d27d7002126aa13b5579dcf27e6254e85b92d444.scope: Deactivated successfully.
Jan 31 05:09:15 np0005603787 podman[170890]: 2026-01-31 10:09:15.329135368 +0000 UTC m=+0.127084273 container attach bc7abd9d5e3ccb27ab4ec824d27d7002126aa13b5579dcf27e6254e85b92d444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:09:15 np0005603787 podman[170890]: 2026-01-31 10:09:15.329543459 +0000 UTC m=+0.127492354 container died bc7abd9d5e3ccb27ab4ec824d27d7002126aa13b5579dcf27e6254e85b92d444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_satoshi, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:09:15 np0005603787 systemd[1]: var-lib-containers-storage-overlay-8d2d06376ed63506374a11da76fa5cb1cfd2aa0c378ffaa15396b2f637419163-merged.mount: Deactivated successfully.
Jan 31 05:09:15 np0005603787 podman[170890]: 2026-01-31 10:09:15.375257197 +0000 UTC m=+0.173206082 container remove bc7abd9d5e3ccb27ab4ec824d27d7002126aa13b5579dcf27e6254e85b92d444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:09:15 np0005603787 systemd[1]: libpod-conmon-bc7abd9d5e3ccb27ab4ec824d27d7002126aa13b5579dcf27e6254e85b92d444.scope: Deactivated successfully.
Jan 31 05:09:15 np0005603787 podman[171182]: 2026-01-31 10:09:15.491886308 +0000 UTC m=+0.039490911 container create 9b8105968c1c58f40ef156edfc7834ad0634e5d06fdad7bdb9241499c12d7b09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_brown, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:09:15 np0005603787 systemd[1]: Started libpod-conmon-9b8105968c1c58f40ef156edfc7834ad0634e5d06fdad7bdb9241499c12d7b09.scope.
Jan 31 05:09:15 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:09:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/633f31966aaf1a7bf90704836866b11369383454456df2cc41ef26c9142c6293/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:09:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/633f31966aaf1a7bf90704836866b11369383454456df2cc41ef26c9142c6293/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:09:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/633f31966aaf1a7bf90704836866b11369383454456df2cc41ef26c9142c6293/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:09:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/633f31966aaf1a7bf90704836866b11369383454456df2cc41ef26c9142c6293/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:09:15 np0005603787 podman[171182]: 2026-01-31 10:09:15.471259534 +0000 UTC m=+0.018864157 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:09:15 np0005603787 podman[171182]: 2026-01-31 10:09:15.581281299 +0000 UTC m=+0.128885922 container init 9b8105968c1c58f40ef156edfc7834ad0634e5d06fdad7bdb9241499c12d7b09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_brown, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:09:15 np0005603787 podman[171182]: 2026-01-31 10:09:15.586410107 +0000 UTC m=+0.134014710 container start 9b8105968c1c58f40ef156edfc7834ad0634e5d06fdad7bdb9241499c12d7b09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 05:09:15 np0005603787 podman[171182]: 2026-01-31 10:09:15.600518536 +0000 UTC m=+0.148123139 container attach 9b8105968c1c58f40ef156edfc7834ad0634e5d06fdad7bdb9241499c12d7b09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_brown, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 05:09:15 np0005603787 lucid_brown[171300]: {
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:    "0": [
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:        {
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "devices": [
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "/dev/loop3"
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            ],
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "lv_name": "ceph_lv0",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "lv_size": "21470642176",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "name": "ceph_lv0",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "tags": {
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.cluster_name": "ceph",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.crush_device_class": "",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.encrypted": "0",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.objectstore": "bluestore",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.osd_id": "0",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.type": "block",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.vdo": "0",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.with_tpm": "0"
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            },
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "type": "block",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "vg_name": "ceph_vg0"
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:        }
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:    ],
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:    "1": [
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:        {
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "devices": [
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "/dev/loop4"
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            ],
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "lv_name": "ceph_lv1",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "lv_size": "21470642176",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "name": "ceph_lv1",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "tags": {
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.cluster_name": "ceph",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.crush_device_class": "",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.encrypted": "0",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.objectstore": "bluestore",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.osd_id": "1",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.type": "block",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.vdo": "0",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.with_tpm": "0"
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            },
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "type": "block",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "vg_name": "ceph_vg1"
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:        }
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:    ],
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:    "2": [
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:        {
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "devices": [
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "/dev/loop5"
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            ],
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "lv_name": "ceph_lv2",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "lv_size": "21470642176",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "name": "ceph_lv2",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "tags": {
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.cluster_name": "ceph",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.crush_device_class": "",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.encrypted": "0",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.objectstore": "bluestore",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.osd_id": "2",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.type": "block",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.vdo": "0",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:                "ceph.with_tpm": "0"
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            },
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "type": "block",
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:            "vg_name": "ceph_vg2"
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:        }
Jan 31 05:09:15 np0005603787 lucid_brown[171300]:    ]
Jan 31 05:09:15 np0005603787 lucid_brown[171300]: }
Jan 31 05:09:15 np0005603787 systemd[1]: libpod-9b8105968c1c58f40ef156edfc7834ad0634e5d06fdad7bdb9241499c12d7b09.scope: Deactivated successfully.
Jan 31 05:09:15 np0005603787 podman[171182]: 2026-01-31 10:09:15.865168572 +0000 UTC m=+0.412773175 container died 9b8105968c1c58f40ef156edfc7834ad0634e5d06fdad7bdb9241499c12d7b09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_brown, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:09:16 np0005603787 systemd[1]: var-lib-containers-storage-overlay-633f31966aaf1a7bf90704836866b11369383454456df2cc41ef26c9142c6293-merged.mount: Deactivated successfully.
Jan 31 05:09:16 np0005603787 podman[171182]: 2026-01-31 10:09:16.187402665 +0000 UTC m=+0.735007278 container remove 9b8105968c1c58f40ef156edfc7834ad0634e5d06fdad7bdb9241499c12d7b09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:09:16 np0005603787 systemd[1]: libpod-conmon-9b8105968c1c58f40ef156edfc7834ad0634e5d06fdad7bdb9241499c12d7b09.scope: Deactivated successfully.
Jan 31 05:09:16 np0005603787 podman[172337]: 2026-01-31 10:09:16.654192359 +0000 UTC m=+0.096902863 container create f2bfc75a8ff5aac002aa2985e53e114934f22268ab79bff5e1b67a59faef9f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 05:09:16 np0005603787 podman[172337]: 2026-01-31 10:09:16.579959836 +0000 UTC m=+0.022670250 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:09:16 np0005603787 systemd[1]: Started libpod-conmon-f2bfc75a8ff5aac002aa2985e53e114934f22268ab79bff5e1b67a59faef9f89.scope.
Jan 31 05:09:16 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:09:16 np0005603787 podman[172337]: 2026-01-31 10:09:16.779954116 +0000 UTC m=+0.222664520 container init f2bfc75a8ff5aac002aa2985e53e114934f22268ab79bff5e1b67a59faef9f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_babbage, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 05:09:16 np0005603787 podman[172337]: 2026-01-31 10:09:16.78420628 +0000 UTC m=+0.226916674 container start f2bfc75a8ff5aac002aa2985e53e114934f22268ab79bff5e1b67a59faef9f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_babbage, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 05:09:16 np0005603787 compassionate_babbage[172534]: 167 167
Jan 31 05:09:16 np0005603787 systemd[1]: libpod-f2bfc75a8ff5aac002aa2985e53e114934f22268ab79bff5e1b67a59faef9f89.scope: Deactivated successfully.
Jan 31 05:09:16 np0005603787 podman[172337]: 2026-01-31 10:09:16.814020002 +0000 UTC m=+0.256730426 container attach f2bfc75a8ff5aac002aa2985e53e114934f22268ab79bff5e1b67a59faef9f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_babbage, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:09:16 np0005603787 podman[172337]: 2026-01-31 10:09:16.814371791 +0000 UTC m=+0.257082195 container died f2bfc75a8ff5aac002aa2985e53e114934f22268ab79bff5e1b67a59faef9f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_babbage, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 05:09:16 np0005603787 systemd[1]: var-lib-containers-storage-overlay-3c3b1668804cf9126482998a18e5002785c580b6d77c8db928e801ed634aa2db-merged.mount: Deactivated successfully.
Jan 31 05:09:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:09:17 np0005603787 podman[172337]: 2026-01-31 10:09:17.102504598 +0000 UTC m=+0.545214992 container remove f2bfc75a8ff5aac002aa2985e53e114934f22268ab79bff5e1b67a59faef9f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_babbage, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:09:17 np0005603787 systemd[1]: libpod-conmon-f2bfc75a8ff5aac002aa2985e53e114934f22268ab79bff5e1b67a59faef9f89.scope: Deactivated successfully.
Jan 31 05:09:17 np0005603787 podman[172996]: 2026-01-31 10:09:17.263416038 +0000 UTC m=+0.064165803 container create 4dc9d3170ef12304352265bd7707db2c8aba96bc8728cb3c253242dd0d2fb33e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 05:09:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:17 np0005603787 systemd[1]: Started libpod-conmon-4dc9d3170ef12304352265bd7707db2c8aba96bc8728cb3c253242dd0d2fb33e.scope.
Jan 31 05:09:17 np0005603787 podman[172996]: 2026-01-31 10:09:17.222387127 +0000 UTC m=+0.023136892 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:09:17 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:09:17 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707d07881afd0617f8d7058dc85d380b1026a6276b51167183c6c24ba9f8c8d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:09:17 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707d07881afd0617f8d7058dc85d380b1026a6276b51167183c6c24ba9f8c8d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:09:17 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707d07881afd0617f8d7058dc85d380b1026a6276b51167183c6c24ba9f8c8d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:09:17 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707d07881afd0617f8d7058dc85d380b1026a6276b51167183c6c24ba9f8c8d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:09:17 np0005603787 podman[172996]: 2026-01-31 10:09:17.366712102 +0000 UTC m=+0.167461867 container init 4dc9d3170ef12304352265bd7707db2c8aba96bc8728cb3c253242dd0d2fb33e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wescoff, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 05:09:17 np0005603787 podman[172996]: 2026-01-31 10:09:17.375880149 +0000 UTC m=+0.176629924 container start 4dc9d3170ef12304352265bd7707db2c8aba96bc8728cb3c253242dd0d2fb33e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wescoff, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:09:17 np0005603787 podman[172996]: 2026-01-31 10:09:17.388480367 +0000 UTC m=+0.189230132 container attach 4dc9d3170ef12304352265bd7707db2c8aba96bc8728cb3c253242dd0d2fb33e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wescoff, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 05:09:17 np0005603787 lvm[173775]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:09:17 np0005603787 lvm[173771]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:09:17 np0005603787 lvm[173775]: VG ceph_vg1 finished
Jan 31 05:09:17 np0005603787 lvm[173771]: VG ceph_vg0 finished
Jan 31 05:09:17 np0005603787 lvm[173800]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:09:17 np0005603787 lvm[173800]: VG ceph_vg2 finished
Jan 31 05:09:18 np0005603787 frosty_wescoff[173131]: {}
Jan 31 05:09:18 np0005603787 systemd[1]: libpod-4dc9d3170ef12304352265bd7707db2c8aba96bc8728cb3c253242dd0d2fb33e.scope: Deactivated successfully.
Jan 31 05:09:18 np0005603787 podman[172996]: 2026-01-31 10:09:18.086610823 +0000 UTC m=+0.887360588 container died 4dc9d3170ef12304352265bd7707db2c8aba96bc8728cb3c253242dd0d2fb33e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wescoff, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 05:09:18 np0005603787 systemd[1]: var-lib-containers-storage-overlay-707d07881afd0617f8d7058dc85d380b1026a6276b51167183c6c24ba9f8c8d7-merged.mount: Deactivated successfully.
Jan 31 05:09:18 np0005603787 podman[172996]: 2026-01-31 10:09:18.137986823 +0000 UTC m=+0.938736588 container remove 4dc9d3170ef12304352265bd7707db2c8aba96bc8728cb3c253242dd0d2fb33e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:09:18 np0005603787 systemd[1]: libpod-conmon-4dc9d3170ef12304352265bd7707db2c8aba96bc8728cb3c253242dd0d2fb33e.scope: Deactivated successfully.
Jan 31 05:09:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:09:18 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:09:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:09:18 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:09:18 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:09:18 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:09:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:09:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:09:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Jan 31 05:09:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Jan 31 05:09:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:09:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Jan 31 05:09:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Jan 31 05:09:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:09:37.048 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:09:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:09:37.049 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:09:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:09:37.049 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:09:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 05:09:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:09:37 np0005603787 podman[180105]: 2026-01-31 10:09:37.841422498 +0000 UTC m=+0.056908309 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 05:09:38 np0005603787 kernel: SELinux:  Converting 2778 SID table entries...
Jan 31 05:09:38 np0005603787 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 05:09:38 np0005603787 kernel: SELinux:  policy capability open_perms=1
Jan 31 05:09:38 np0005603787 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 05:09:38 np0005603787 kernel: SELinux:  policy capability always_check_network=0
Jan 31 05:09:38 np0005603787 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 05:09:38 np0005603787 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 05:09:38 np0005603787 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 05:09:39 np0005603787 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 31 05:09:39 np0005603787 podman[180129]: 2026-01-31 10:09:39.133476122 +0000 UTC m=+0.098099375 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 05:09:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 05:09:40 np0005603787 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 31 05:09:40 np0005603787 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 31 05:09:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Jan 31 05:09:42 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:09:43
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'backups', '.mgr', 'default.rgw.meta', 'volumes']
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:09:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:09:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 31 05:09:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 0 B/s wr, 1 op/s
Jan 31 05:09:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:09:49 np0005603787 systemd[1]: Stopping OpenSSH server daemon...
Jan 31 05:09:49 np0005603787 systemd[1]: sshd.service: Deactivated successfully.
Jan 31 05:09:49 np0005603787 systemd[1]: Stopped OpenSSH server daemon.
Jan 31 05:09:49 np0005603787 systemd[1]: sshd.service: Consumed 1.837s CPU time, read 32.0K from disk, written 0B to disk.
Jan 31 05:09:49 np0005603787 systemd[1]: Stopped target sshd-keygen.target.
Jan 31 05:09:49 np0005603787 systemd[1]: Stopping sshd-keygen.target...
Jan 31 05:09:49 np0005603787 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 05:09:49 np0005603787 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 05:09:49 np0005603787 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 05:09:49 np0005603787 systemd[1]: Reached target sshd-keygen.target.
Jan 31 05:09:49 np0005603787 systemd[1]: Starting OpenSSH server daemon...
Jan 31 05:09:49 np0005603787 systemd[1]: Started OpenSSH server daemon.
Jan 31 05:09:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:50 np0005603787 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 05:09:50 np0005603787 systemd[1]: Starting man-db-cache-update.service...
Jan 31 05:09:50 np0005603787 systemd[1]: Reloading.
Jan 31 05:09:51 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:09:51 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:09:51 np0005603787 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 05:09:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:09:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1786947556520692e-06 of space, bias 4.0, pg target 0.0014144337067824831 quantized to 16 (current 16)
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:09:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:09:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:57 np0005603787 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 05:09:57 np0005603787 systemd[1]: Finished man-db-cache-update.service.
Jan 31 05:09:57 np0005603787 systemd[1]: man-db-cache-update.service: Consumed 7.083s CPU time.
Jan 31 05:09:57 np0005603787 systemd[1]: run-r3b2d37d584f24390bdc61ebcf48f73a6.service: Deactivated successfully.
Jan 31 05:09:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:09:57 np0005603787 python3.9[189807]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 05:09:57 np0005603787 systemd[1]: Reloading.
Jan 31 05:09:57 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:09:57 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:09:58 np0005603787 python3.9[189997]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 05:09:58 np0005603787 systemd[1]: Reloading.
Jan 31 05:09:59 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:09:59 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:09:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:09:59 np0005603787 python3.9[190188]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 05:09:59 np0005603787 systemd[1]: Reloading.
Jan 31 05:09:59 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:09:59 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:10:00 np0005603787 python3.9[190378]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 05:10:00 np0005603787 systemd[1]: Reloading.
Jan 31 05:10:01 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:10:01 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:10:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:01 np0005603787 python3.9[190568]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:01 np0005603787 systemd[1]: Reloading.
Jan 31 05:10:02 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:10:02 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:10:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:10:03 np0005603787 python3.9[190759]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:03 np0005603787 systemd[1]: Reloading.
Jan 31 05:10:03 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:10:03 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:10:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:04 np0005603787 python3.9[190950]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:04 np0005603787 systemd[1]: Reloading.
Jan 31 05:10:04 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:10:04 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:10:05 np0005603787 python3.9[191140]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:05 np0005603787 python3.9[191295]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:06 np0005603787 systemd[1]: Reloading.
Jan 31 05:10:06 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:10:06 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:10:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:07 np0005603787 python3.9[191485]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 05:10:07 np0005603787 systemd[1]: Reloading.
Jan 31 05:10:07 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:10:07 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:10:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:10:07 np0005603787 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 31 05:10:07 np0005603787 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 31 05:10:08 np0005603787 podman[191650]: 2026-01-31 10:10:08.195543802 +0000 UTC m=+0.055890566 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 05:10:08 np0005603787 python3.9[191695]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:09 np0005603787 python3.9[191852]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:09 np0005603787 podman[191854]: 2026-01-31 10:10:09.308514418 +0000 UTC m=+0.067343126 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 05:10:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:09 np0005603787 python3.9[192032]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:10 np0005603787 python3.9[192187]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:11 np0005603787 python3.9[192342]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:12 np0005603787 python3.9[192497]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:10:12 np0005603787 python3.9[192652]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:13 np0005603787 python3.9[192807]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:10:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:10:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:10:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:10:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:10:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:10:14 np0005603787 python3.9[192962]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:15 np0005603787 python3.9[193117]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:15 np0005603787 python3.9[193272]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:16 np0005603787 python3.9[193427]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:17 np0005603787 python3.9[193582]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:10:18 np0005603787 python3.9[193737]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 05:10:18 np0005603787 podman[193987]: 2026-01-31 10:10:18.753587131 +0000 UTC m=+0.053520448 container exec 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Jan 31 05:10:18 np0005603787 python3.9[193958]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:10:18 np0005603787 podman[193987]: 2026-01-31 10:10:18.833405326 +0000 UTC m=+0.133338643 container exec_died 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:10:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:19 np0005603787 python3.9[194273]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:10:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:10:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:10:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:10:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:10:19 np0005603787 python3.9[194546]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:10:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:10:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:10:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:10:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:10:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:10:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:10:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:10:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:10:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:10:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:10:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:10:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:10:20 np0005603787 podman[194778]: 2026-01-31 10:10:20.340553097 +0000 UTC m=+0.080126463 container create 40a17bd8914e6c527abdb08fd50946374464010149fc27f9be935cc3a763d3a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:10:20 np0005603787 podman[194778]: 2026-01-31 10:10:20.278007396 +0000 UTC m=+0.017580742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:10:20 np0005603787 python3.9[194766]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:10:20 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:10:20 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:10:20 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:10:20 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:10:20 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:10:20 np0005603787 systemd[1]: Started libpod-conmon-40a17bd8914e6c527abdb08fd50946374464010149fc27f9be935cc3a763d3a2.scope.
Jan 31 05:10:20 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:10:20 np0005603787 podman[194778]: 2026-01-31 10:10:20.538254972 +0000 UTC m=+0.277828388 container init 40a17bd8914e6c527abdb08fd50946374464010149fc27f9be935cc3a763d3a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_rosalind, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:10:20 np0005603787 podman[194778]: 2026-01-31 10:10:20.545948388 +0000 UTC m=+0.285521744 container start 40a17bd8914e6c527abdb08fd50946374464010149fc27f9be935cc3a763d3a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_rosalind, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 05:10:20 np0005603787 systemd[1]: libpod-40a17bd8914e6c527abdb08fd50946374464010149fc27f9be935cc3a763d3a2.scope: Deactivated successfully.
Jan 31 05:10:20 np0005603787 gracious_rosalind[194818]: 167 167
Jan 31 05:10:20 np0005603787 conmon[194818]: conmon 40a17bd8914e6c527abd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40a17bd8914e6c527abdb08fd50946374464010149fc27f9be935cc3a763d3a2.scope/container/memory.events
Jan 31 05:10:20 np0005603787 podman[194778]: 2026-01-31 10:10:20.595308809 +0000 UTC m=+0.334882165 container attach 40a17bd8914e6c527abdb08fd50946374464010149fc27f9be935cc3a763d3a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:10:20 np0005603787 podman[194778]: 2026-01-31 10:10:20.595730951 +0000 UTC m=+0.335304317 container died 40a17bd8914e6c527abdb08fd50946374464010149fc27f9be935cc3a763d3a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_rosalind, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 05:10:20 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f52116a6b5c5ebf84a38cb8843cf161a78cf9800fbdcf8d1143d9ad262f02103-merged.mount: Deactivated successfully.
Jan 31 05:10:20 np0005603787 podman[194778]: 2026-01-31 10:10:20.691240064 +0000 UTC m=+0.430813430 container remove 40a17bd8914e6c527abdb08fd50946374464010149fc27f9be935cc3a763d3a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 05:10:20 np0005603787 systemd[1]: libpod-conmon-40a17bd8914e6c527abdb08fd50946374464010149fc27f9be935cc3a763d3a2.scope: Deactivated successfully.
Jan 31 05:10:20 np0005603787 podman[194972]: 2026-01-31 10:10:20.848158377 +0000 UTC m=+0.041428440 container create 2ff98edd522be15f5cdd034df360e8ac651d0b91bc867e0d60edaef585ee5bee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_fermi, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 05:10:20 np0005603787 systemd[1]: Started libpod-conmon-2ff98edd522be15f5cdd034df360e8ac651d0b91bc867e0d60edaef585ee5bee.scope.
Jan 31 05:10:20 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:10:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2526af09cd7312c0dcf1d51ed4cf890fd6e51add87b4fa471e8469755dda4a54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:10:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2526af09cd7312c0dcf1d51ed4cf890fd6e51add87b4fa471e8469755dda4a54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:10:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2526af09cd7312c0dcf1d51ed4cf890fd6e51add87b4fa471e8469755dda4a54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:10:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2526af09cd7312c0dcf1d51ed4cf890fd6e51add87b4fa471e8469755dda4a54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:10:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2526af09cd7312c0dcf1d51ed4cf890fd6e51add87b4fa471e8469755dda4a54/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:10:20 np0005603787 podman[194972]: 2026-01-31 10:10:20.926311335 +0000 UTC m=+0.119581418 container init 2ff98edd522be15f5cdd034df360e8ac651d0b91bc867e0d60edaef585ee5bee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_fermi, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:10:20 np0005603787 podman[194972]: 2026-01-31 10:10:20.829069013 +0000 UTC m=+0.022339116 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:10:20 np0005603787 podman[194972]: 2026-01-31 10:10:20.94186861 +0000 UTC m=+0.135138713 container start 2ff98edd522be15f5cdd034df360e8ac651d0b91bc867e0d60edaef585ee5bee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:10:20 np0005603787 podman[194972]: 2026-01-31 10:10:20.946637574 +0000 UTC m=+0.139907727 container attach 2ff98edd522be15f5cdd034df360e8ac651d0b91bc867e0d60edaef585ee5bee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 05:10:20 np0005603787 python3.9[194966]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:10:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:21 np0005603787 naughty_fermi[194989]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:10:21 np0005603787 naughty_fermi[194989]: --> All data devices are unavailable
Jan 31 05:10:21 np0005603787 systemd[1]: libpod-2ff98edd522be15f5cdd034df360e8ac651d0b91bc867e0d60edaef585ee5bee.scope: Deactivated successfully.
Jan 31 05:10:21 np0005603787 podman[194972]: 2026-01-31 10:10:21.374174062 +0000 UTC m=+0.567444145 container died 2ff98edd522be15f5cdd034df360e8ac651d0b91bc867e0d60edaef585ee5bee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_fermi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:10:21 np0005603787 systemd[1]: var-lib-containers-storage-overlay-2526af09cd7312c0dcf1d51ed4cf890fd6e51add87b4fa471e8469755dda4a54-merged.mount: Deactivated successfully.
Jan 31 05:10:21 np0005603787 podman[194972]: 2026-01-31 10:10:21.418474733 +0000 UTC m=+0.611744806 container remove 2ff98edd522be15f5cdd034df360e8ac651d0b91bc867e0d60edaef585ee5bee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_fermi, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:10:21 np0005603787 systemd[1]: libpod-conmon-2ff98edd522be15f5cdd034df360e8ac651d0b91bc867e0d60edaef585ee5bee.scope: Deactivated successfully.
Jan 31 05:10:21 np0005603787 python3.9[195160]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:10:21 np0005603787 podman[195266]: 2026-01-31 10:10:21.75882298 +0000 UTC m=+0.040973967 container create b31f8e0b1f812c112a0e927810e858034b4e971fb62890ea1923286bd253c53a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 05:10:21 np0005603787 systemd[1]: Started libpod-conmon-b31f8e0b1f812c112a0e927810e858034b4e971fb62890ea1923286bd253c53a.scope.
Jan 31 05:10:21 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:10:21 np0005603787 podman[195266]: 2026-01-31 10:10:21.829269452 +0000 UTC m=+0.111420459 container init b31f8e0b1f812c112a0e927810e858034b4e971fb62890ea1923286bd253c53a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_zhukovsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 05:10:21 np0005603787 podman[195266]: 2026-01-31 10:10:21.736070344 +0000 UTC m=+0.018221351 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:10:21 np0005603787 podman[195266]: 2026-01-31 10:10:21.834672564 +0000 UTC m=+0.116823571 container start b31f8e0b1f812c112a0e927810e858034b4e971fb62890ea1923286bd253c53a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:10:21 np0005603787 optimistic_zhukovsky[195327]: 167 167
Jan 31 05:10:21 np0005603787 podman[195266]: 2026-01-31 10:10:21.838049288 +0000 UTC m=+0.120200285 container attach b31f8e0b1f812c112a0e927810e858034b4e971fb62890ea1923286bd253c53a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_zhukovsky, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:10:21 np0005603787 systemd[1]: libpod-b31f8e0b1f812c112a0e927810e858034b4e971fb62890ea1923286bd253c53a.scope: Deactivated successfully.
Jan 31 05:10:21 np0005603787 podman[195266]: 2026-01-31 10:10:21.839222391 +0000 UTC m=+0.121373388 container died b31f8e0b1f812c112a0e927810e858034b4e971fb62890ea1923286bd253c53a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:10:21 np0005603787 systemd[1]: var-lib-containers-storage-overlay-a2f474e25c7b495b4cd36e2bc297c442497eb4118f19d3ad3e4aa7cd1141f934-merged.mount: Deactivated successfully.
Jan 31 05:10:21 np0005603787 podman[195266]: 2026-01-31 10:10:21.881665649 +0000 UTC m=+0.163816636 container remove b31f8e0b1f812c112a0e927810e858034b4e971fb62890ea1923286bd253c53a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_zhukovsky, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 05:10:21 np0005603787 systemd[1]: libpod-conmon-b31f8e0b1f812c112a0e927810e858034b4e971fb62890ea1923286bd253c53a.scope: Deactivated successfully.
Jan 31 05:10:22 np0005603787 podman[195400]: 2026-01-31 10:10:22.04888611 +0000 UTC m=+0.048605242 container create 0537529a13ac353943ba9c23f27848f0364a32a18ed18fb5653ec2bf663bf8e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 05:10:22 np0005603787 systemd[1]: Started libpod-conmon-0537529a13ac353943ba9c23f27848f0364a32a18ed18fb5653ec2bf663bf8e8.scope.
Jan 31 05:10:22 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:10:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aefbc7d738b66fd228e7248be3cadc308479ecbf1b7f49422ec79a16e9a70c2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:10:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aefbc7d738b66fd228e7248be3cadc308479ecbf1b7f49422ec79a16e9a70c2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:10:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aefbc7d738b66fd228e7248be3cadc308479ecbf1b7f49422ec79a16e9a70c2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:10:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aefbc7d738b66fd228e7248be3cadc308479ecbf1b7f49422ec79a16e9a70c2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:10:22 np0005603787 podman[195400]: 2026-01-31 10:10:22.030849795 +0000 UTC m=+0.030568927 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:10:22 np0005603787 podman[195400]: 2026-01-31 10:10:22.137268734 +0000 UTC m=+0.136987876 container init 0537529a13ac353943ba9c23f27848f0364a32a18ed18fb5653ec2bf663bf8e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 05:10:22 np0005603787 podman[195400]: 2026-01-31 10:10:22.143049746 +0000 UTC m=+0.142768908 container start 0537529a13ac353943ba9c23f27848f0364a32a18ed18fb5653ec2bf663bf8e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:10:22 np0005603787 podman[195400]: 2026-01-31 10:10:22.14781476 +0000 UTC m=+0.147533902 container attach 0537529a13ac353943ba9c23f27848f0364a32a18ed18fb5653ec2bf663bf8e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:10:22 np0005603787 python3.9[195437]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]: {
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:    "0": [
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:        {
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "devices": [
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "/dev/loop3"
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            ],
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "lv_name": "ceph_lv0",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "lv_size": "21470642176",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "name": "ceph_lv0",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "tags": {
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.cluster_name": "ceph",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.crush_device_class": "",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.encrypted": "0",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.objectstore": "bluestore",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.osd_id": "0",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.type": "block",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.vdo": "0",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.with_tpm": "0"
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            },
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "type": "block",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "vg_name": "ceph_vg0"
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:        }
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:    ],
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:    "1": [
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:        {
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "devices": [
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "/dev/loop4"
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            ],
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "lv_name": "ceph_lv1",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "lv_size": "21470642176",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "name": "ceph_lv1",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "tags": {
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.cluster_name": "ceph",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.crush_device_class": "",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.encrypted": "0",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.objectstore": "bluestore",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.osd_id": "1",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.type": "block",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.vdo": "0",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.with_tpm": "0"
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            },
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "type": "block",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "vg_name": "ceph_vg1"
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:        }
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:    ],
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:    "2": [
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:        {
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "devices": [
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "/dev/loop5"
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            ],
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "lv_name": "ceph_lv2",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "lv_size": "21470642176",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "name": "ceph_lv2",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "tags": {
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.cluster_name": "ceph",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.crush_device_class": "",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.encrypted": "0",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.objectstore": "bluestore",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.osd_id": "2",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.type": "block",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.vdo": "0",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:                "ceph.with_tpm": "0"
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            },
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "type": "block",
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:            "vg_name": "ceph_vg2"
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:        }
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]:    ]
Jan 31 05:10:22 np0005603787 youthful_ganguly[195443]: }
Jan 31 05:10:22 np0005603787 systemd[1]: libpod-0537529a13ac353943ba9c23f27848f0364a32a18ed18fb5653ec2bf663bf8e8.scope: Deactivated successfully.
Jan 31 05:10:22 np0005603787 podman[195400]: 2026-01-31 10:10:22.448201078 +0000 UTC m=+0.447920200 container died 0537529a13ac353943ba9c23f27848f0364a32a18ed18fb5653ec2bf663bf8e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:10:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:10:22 np0005603787 systemd[1]: var-lib-containers-storage-overlay-aefbc7d738b66fd228e7248be3cadc308479ecbf1b7f49422ec79a16e9a70c2b-merged.mount: Deactivated successfully.
Jan 31 05:10:22 np0005603787 podman[195400]: 2026-01-31 10:10:22.670903552 +0000 UTC m=+0.670622694 container remove 0537529a13ac353943ba9c23f27848f0364a32a18ed18fb5653ec2bf663bf8e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:10:22 np0005603787 systemd[1]: libpod-conmon-0537529a13ac353943ba9c23f27848f0364a32a18ed18fb5653ec2bf663bf8e8.scope: Deactivated successfully.
Jan 31 05:10:23 np0005603787 python3.9[195664]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:23 np0005603787 podman[195677]: 2026-01-31 10:10:23.113015239 +0000 UTC m=+0.036578224 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:10:23 np0005603787 podman[195677]: 2026-01-31 10:10:23.298343067 +0000 UTC m=+0.221906052 container create c2d856342280d6c4b859056093364d519e0e348f69b76adb72e432ece375c887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 05:10:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:23 np0005603787 systemd[1]: Started libpod-conmon-c2d856342280d6c4b859056093364d519e0e348f69b76adb72e432ece375c887.scope.
Jan 31 05:10:23 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:10:23 np0005603787 podman[195677]: 2026-01-31 10:10:23.53134907 +0000 UTC m=+0.454912085 container init c2d856342280d6c4b859056093364d519e0e348f69b76adb72e432ece375c887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_brahmagupta, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:10:23 np0005603787 podman[195677]: 2026-01-31 10:10:23.539771916 +0000 UTC m=+0.463334901 container start c2d856342280d6c4b859056093364d519e0e348f69b76adb72e432ece375c887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_brahmagupta, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 05:10:23 np0005603787 peaceful_brahmagupta[195749]: 167 167
Jan 31 05:10:23 np0005603787 systemd[1]: libpod-c2d856342280d6c4b859056093364d519e0e348f69b76adb72e432ece375c887.scope: Deactivated successfully.
Jan 31 05:10:23 np0005603787 conmon[195749]: conmon c2d856342280d6c4b859 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c2d856342280d6c4b859056093364d519e0e348f69b76adb72e432ece375c887.scope/container/memory.events
Jan 31 05:10:23 np0005603787 podman[195677]: 2026-01-31 10:10:23.625954118 +0000 UTC m=+0.549517153 container attach c2d856342280d6c4b859056093364d519e0e348f69b76adb72e432ece375c887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:10:23 np0005603787 podman[195677]: 2026-01-31 10:10:23.626555926 +0000 UTC m=+0.550118911 container died c2d856342280d6c4b859056093364d519e0e348f69b76adb72e432ece375c887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:10:23 np0005603787 systemd[1]: var-lib-containers-storage-overlay-6985029bbc1170a984a36e992b1880eb2fa8d27810812d5773854a4532a48e41-merged.mount: Deactivated successfully.
Jan 31 05:10:23 np0005603787 podman[195677]: 2026-01-31 10:10:23.726635007 +0000 UTC m=+0.650197942 container remove c2d856342280d6c4b859056093364d519e0e348f69b76adb72e432ece375c887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 05:10:23 np0005603787 systemd[1]: libpod-conmon-c2d856342280d6c4b859056093364d519e0e348f69b76adb72e432ece375c887.scope: Deactivated successfully.
Jan 31 05:10:23 np0005603787 python3.9[195835]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769854222.48948-557-39420739820761/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:23 np0005603787 podman[195844]: 2026-01-31 10:10:23.855915556 +0000 UTC m=+0.040390511 container create 7163f3c20ce8e84681c37e0acf3335533aca7349e980054d8189d7ac156937a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 05:10:23 np0005603787 systemd[1]: Started libpod-conmon-7163f3c20ce8e84681c37e0acf3335533aca7349e980054d8189d7ac156937a9.scope.
Jan 31 05:10:23 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:10:23 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014c3e274cc14393620293b62966154c5e056cef424271c22884b1ffc82a4979/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:10:23 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014c3e274cc14393620293b62966154c5e056cef424271c22884b1ffc82a4979/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:10:23 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014c3e274cc14393620293b62966154c5e056cef424271c22884b1ffc82a4979/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:10:23 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014c3e274cc14393620293b62966154c5e056cef424271c22884b1ffc82a4979/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:10:23 np0005603787 podman[195844]: 2026-01-31 10:10:23.836686188 +0000 UTC m=+0.021161233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:10:23 np0005603787 podman[195844]: 2026-01-31 10:10:23.941263156 +0000 UTC m=+0.125738151 container init 7163f3c20ce8e84681c37e0acf3335533aca7349e980054d8189d7ac156937a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 05:10:23 np0005603787 podman[195844]: 2026-01-31 10:10:23.949723002 +0000 UTC m=+0.134197967 container start 7163f3c20ce8e84681c37e0acf3335533aca7349e980054d8189d7ac156937a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 05:10:23 np0005603787 podman[195844]: 2026-01-31 10:10:23.962096548 +0000 UTC m=+0.146571553 container attach 7163f3c20ce8e84681c37e0acf3335533aca7349e980054d8189d7ac156937a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:10:24 np0005603787 python3.9[196026]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:24 np0005603787 lvm[196163]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:10:24 np0005603787 lvm[196162]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:10:24 np0005603787 lvm[196163]: VG ceph_vg1 finished
Jan 31 05:10:24 np0005603787 lvm[196162]: VG ceph_vg0 finished
Jan 31 05:10:24 np0005603787 lvm[196177]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:10:24 np0005603787 lvm[196177]: VG ceph_vg2 finished
Jan 31 05:10:24 np0005603787 lvm[196186]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:10:24 np0005603787 lvm[196186]: VG ceph_vg0 finished
Jan 31 05:10:24 np0005603787 blissful_margulis[195884]: {}
Jan 31 05:10:24 np0005603787 systemd[1]: libpod-7163f3c20ce8e84681c37e0acf3335533aca7349e980054d8189d7ac156937a9.scope: Deactivated successfully.
Jan 31 05:10:24 np0005603787 systemd[1]: libpod-7163f3c20ce8e84681c37e0acf3335533aca7349e980054d8189d7ac156937a9.scope: Consumed 1.018s CPU time.
Jan 31 05:10:24 np0005603787 podman[195844]: 2026-01-31 10:10:24.727426383 +0000 UTC m=+0.911901358 container died 7163f3c20ce8e84681c37e0acf3335533aca7349e980054d8189d7ac156937a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:10:24 np0005603787 systemd[1]: var-lib-containers-storage-overlay-014c3e274cc14393620293b62966154c5e056cef424271c22884b1ffc82a4979-merged.mount: Deactivated successfully.
Jan 31 05:10:24 np0005603787 podman[195844]: 2026-01-31 10:10:24.782025131 +0000 UTC m=+0.966500086 container remove 7163f3c20ce8e84681c37e0acf3335533aca7349e980054d8189d7ac156937a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 05:10:24 np0005603787 systemd[1]: libpod-conmon-7163f3c20ce8e84681c37e0acf3335533aca7349e980054d8189d7ac156937a9.scope: Deactivated successfully.
Jan 31 05:10:24 np0005603787 python3.9[196220]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769854223.93481-557-55256666568362/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:10:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:10:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:10:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:10:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:25 np0005603787 python3.9[196412]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:25 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:10:25 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:10:26 np0005603787 python3.9[196537]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769854225.0933857-557-210558786812252/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:26 np0005603787 python3.9[196689]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:27 np0005603787 python3.9[196814]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769854226.19476-557-8462895342623/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:10:27 np0005603787 python3.9[196966]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:28 np0005603787 python3.9[197091]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769854227.251495-557-56304878542371/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:28 np0005603787 python3.9[197243]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:29 np0005603787 python3.9[197368]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769854228.297309-557-119296903091090/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:29 np0005603787 python3.9[197520]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:30 np0005603787 python3.9[197643]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769854229.3427637-557-106196902433160/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:30 np0005603787 python3.9[197795]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:31 np0005603787 python3.9[197920]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769854230.3951-557-277568036592392/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:31 np0005603787 python3.9[198072]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 31 05:10:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:10:32 np0005603787 python3.9[198225]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:33 np0005603787 python3.9[198377]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:33 np0005603787 python3.9[198529]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:34 np0005603787 python3.9[198681]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:34 np0005603787 python3.9[198833]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:35 np0005603787 python3.9[198985]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:36 np0005603787 python3.9[199137]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:36 np0005603787 python3.9[199289]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:10:37.048 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 05:10:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:10:37.049 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 05:10:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:10:37.050 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 05:10:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:37 np0005603787 python3.9[199441]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:10:37 np0005603787 python3.9[199593]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:38 np0005603787 podman[199717]: 2026-01-31 10:10:38.410032292 +0000 UTC m=+0.066928764 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 05:10:38 np0005603787 python3.9[199763]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:39 np0005603787 python3.9[199917]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:39 np0005603787 podman[200041]: 2026-01-31 10:10:39.557006341 +0000 UTC m=+0.076760770 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 05:10:39 np0005603787 python3.9[200081]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:40 np0005603787 python3.9[200247]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:40 np0005603787 python3.9[200399]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:41 np0005603787 python3.9[200522]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769854240.3663735-778-254403155371752/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:41 np0005603787 python3.9[200674]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:42 np0005603787 python3.9[200797]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769854241.4085844-778-170611034859195/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:42 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:10:42 np0005603787 python3.9[200949]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:10:43
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.log', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'images', '.mgr']
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:43 np0005603787 python3.9[201072]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769854242.469009-778-40417449208110/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:10:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:10:44 np0005603787 python3.9[201224]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:44 np0005603787 python3.9[201347]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769854243.5466657-778-22490597994913/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:45 np0005603787 python3.9[201499]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:45 np0005603787 python3.9[201622]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769854244.7347293-778-214401246632589/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:46 np0005603787 python3.9[201774]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:46 np0005603787 python3.9[201897]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769854245.8386867-778-230080844202723/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:47 np0005603787 python3.9[202049]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:10:47 np0005603787 python3.9[202172]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769854247.0880387-778-158009697110613/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:48 np0005603787 python3.9[202324]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:48 np0005603787 python3.9[202447]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769854248.0785515-778-172593671915876/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:49 np0005603787 python3.9[202599]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:49 np0005603787 python3.9[202722]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769854249.1244047-778-95494485621831/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:50 np0005603787 python3.9[202874]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:50 np0005603787 python3.9[202997]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769854250.0845854-778-267620313263911/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:51 np0005603787 python3.9[203149]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:52 np0005603787 python3.9[203272]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769854251.1539986-778-119450363210563/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:10:52 np0005603787 python3.9[203424]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:52 np0005603787 python3.9[203547]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769854252.1304853-778-85025984039680/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:53 np0005603787 python3.9[203699]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:54 np0005603787 python3.9[203822]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769854253.0939784-778-10960455380883/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1786947556520692e-06 of space, bias 4.0, pg target 0.0014144337067824831 quantized to 16 (current 16)
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:10:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:10:54 np0005603787 python3.9[203974]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:10:55 np0005603787 python3.9[204097]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769854254.1500158-778-34263071121563/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:55 np0005603787 python3.9[204247]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:10:56 np0005603787 python3.9[204402]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 31 05:10:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:10:57 np0005603787 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 31 05:10:58 np0005603787 python3.9[204558]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:58 np0005603787 python3.9[204710]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:59 np0005603787 python3.9[204862]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:10:59 np0005603787 python3.9[205014]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:10:59 np0005603787 auditd[703]: Audit daemon rotating log files
Jan 31 05:11:00 np0005603787 python3.9[205166]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:00 np0005603787 python3.9[205318]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:01 np0005603787 python3.9[205470]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:02 np0005603787 python3.9[205622]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:11:02 np0005603787 python3.9[205774]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:03 np0005603787 python3.9[205926]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:04 np0005603787 python3.9[206078]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 05:11:04 np0005603787 systemd[1]: Reloading.
Jan 31 05:11:04 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:11:04 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:11:04 np0005603787 systemd[1]: Starting libvirt logging daemon socket...
Jan 31 05:11:04 np0005603787 systemd[1]: Listening on libvirt logging daemon socket.
Jan 31 05:11:04 np0005603787 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 31 05:11:04 np0005603787 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 31 05:11:04 np0005603787 systemd[1]: Starting libvirt logging daemon...
Jan 31 05:11:04 np0005603787 systemd[1]: Started libvirt logging daemon.
Jan 31 05:11:05 np0005603787 python3.9[206270]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 05:11:05 np0005603787 systemd[1]: Reloading.
Jan 31 05:11:05 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:11:05 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:11:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:05 np0005603787 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 31 05:11:05 np0005603787 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 31 05:11:05 np0005603787 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 31 05:11:05 np0005603787 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 31 05:11:05 np0005603787 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 31 05:11:05 np0005603787 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 31 05:11:05 np0005603787 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 05:11:05 np0005603787 systemd[1]: Started libvirt nodedev daemon.
Jan 31 05:11:06 np0005603787 python3.9[206486]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 05:11:06 np0005603787 systemd[1]: Reloading.
Jan 31 05:11:06 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:11:06 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:11:06 np0005603787 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 31 05:11:06 np0005603787 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 31 05:11:06 np0005603787 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 31 05:11:06 np0005603787 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 31 05:11:06 np0005603787 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 31 05:11:06 np0005603787 systemd[1]: Starting libvirt proxy daemon...
Jan 31 05:11:06 np0005603787 systemd[1]: Started libvirt proxy daemon.
Jan 31 05:11:07 np0005603787 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 31 05:11:07 np0005603787 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 31 05:11:07 np0005603787 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 31 05:11:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:11:07 np0005603787 python3.9[206702]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 05:11:07 np0005603787 systemd[1]: Reloading.
Jan 31 05:11:07 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:11:07 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:11:07 np0005603787 systemd[1]: Listening on libvirt locking daemon socket.
Jan 31 05:11:07 np0005603787 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 31 05:11:07 np0005603787 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 31 05:11:07 np0005603787 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 31 05:11:07 np0005603787 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 31 05:11:07 np0005603787 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 31 05:11:07 np0005603787 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 31 05:11:07 np0005603787 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 31 05:11:07 np0005603787 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 31 05:11:07 np0005603787 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 31 05:11:07 np0005603787 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 05:11:08 np0005603787 systemd[1]: Started libvirt QEMU daemon.
Jan 31 05:11:08 np0005603787 setroubleshoot[206524]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l bacb9310-ec35-4f7c-9b33-53983af7b6da
Jan 31 05:11:08 np0005603787 setroubleshoot[206524]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
Jan 31 05:11:08 np0005603787 setroubleshoot[206524]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l bacb9310-ec35-4f7c-9b33-53983af7b6da
Jan 31 05:11:08 np0005603787 setroubleshoot[206524]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
Jan 31 05:11:08 np0005603787 podman[206924]: 2026-01-31 10:11:08.500756472 +0000 UTC m=+0.056074058 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 05:11:08 np0005603787 python3.9[206925]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 05:11:08 np0005603787 systemd[1]: Reloading.
Jan 31 05:11:08 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:11:08 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:11:09 np0005603787 systemd[1]: Starting libvirt secret daemon socket...
Jan 31 05:11:09 np0005603787 systemd[1]: Listening on libvirt secret daemon socket.
Jan 31 05:11:09 np0005603787 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 31 05:11:09 np0005603787 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 31 05:11:09 np0005603787 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 31 05:11:09 np0005603787 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 31 05:11:09 np0005603787 systemd[1]: Starting libvirt secret daemon...
Jan 31 05:11:09 np0005603787 systemd[1]: Started libvirt secret daemon.
Jan 31 05:11:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:09 np0005603787 podman[207154]: 2026-01-31 10:11:09.686984869 +0000 UTC m=+0.073130874 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 05:11:09 np0005603787 python3.9[207155]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:10 np0005603787 python3.9[207332]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 05:11:10 np0005603787 python3.9[207484]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:11:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:11 np0005603787 python3.9[207638]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 05:11:12 np0005603787 python3.9[207788]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:11:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:11:12 np0005603787 python3.9[207909]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769854271.8669906-1136-143058268679797/.source.xml follow=False _original_basename=secret.xml.j2 checksum=3d27dfd3529f8944173fc0a6237cef945432dd5b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:13 np0005603787 python3.9[208061]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 962d77ae-dc67-5de8-89d8-3d1670c67b61#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:11:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:11:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:11:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:11:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:11:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:11:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:11:14 np0005603787 python3.9[208223]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:16 np0005603787 python3.9[208686]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:16 np0005603787 python3.9[208838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:11:17 np0005603787 python3.9[208961]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769854276.285455-1191-110672314318836/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:11:17 np0005603787 python3.9[209113]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:18 np0005603787 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 31 05:11:18 np0005603787 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 31 05:11:18 np0005603787 python3.9[209265]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:11:18 np0005603787 python3.9[209343]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:19 np0005603787 python3.9[209495]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:11:19 np0005603787 python3.9[209573]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.qhdu5new recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:20 np0005603787 python3.9[209725]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:11:20 np0005603787 python3.9[209803]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:21 np0005603787 python3.9[209955]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
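The nft -j list ruleset call above dumps the current ruleset in libnftables JSON form for the firewall role to inspect before the EDPM chain, jump and rule files are written. A minimal sketch of reading that output, assuming only the documented top-level shape ({"nftables": [...]} with one wrapper object per table, chain or rule); the helper name is illustrative:

    import json
    import subprocess

    def current_tables():
        # Each entry in the "nftables" array wraps exactly one object, e.g.
        # {"metainfo": {...}}, {"table": {...}}, {"chain": {...}} or {"rule": {...}}.
        out = subprocess.run(["nft", "-j", "list", "ruleset"],
                             capture_output=True, text=True, check=True).stdout
        ruleset = json.loads(out)["nftables"]
        return [f'{obj["table"]["family"]} {obj["table"]["name"]}'
                for obj in ruleset if "table" in obj]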
Jan 31 05:11:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:22 np0005603787 python3[210108]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 05:11:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:11:22 np0005603787 python3.9[210260]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:11:23 np0005603787 python3.9[210338]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:23 np0005603787 python3.9[210490]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:11:24 np0005603787 python3.9[210615]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769854283.2639635-1280-148004721661998/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:24 np0005603787 python3.9[210767]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:11:25 np0005603787 python3.9[210895]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:11:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:11:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:11:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:11:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:11:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:11:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:11:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:11:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:11:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:11:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:11:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:11:25 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:11:25 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:11:25 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:11:25 np0005603787 python3.9[211128]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:11:25 np0005603787 podman[211141]: 2026-01-31 10:11:25.933296725 +0000 UTC m=+0.044866638 container create 16176b5d0306ad2adb2f644a9874170d600962de9a4462094ae34f91dfcc77c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:11:25 np0005603787 systemd[1]: Started libpod-conmon-16176b5d0306ad2adb2f644a9874170d600962de9a4462094ae34f91dfcc77c9.scope.
Jan 31 05:11:26 np0005603787 podman[211141]: 2026-01-31 10:11:25.912727619 +0000 UTC m=+0.024297592 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:11:26 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:11:26 np0005603787 podman[211141]: 2026-01-31 10:11:26.023801172 +0000 UTC m=+0.135371125 container init 16176b5d0306ad2adb2f644a9874170d600962de9a4462094ae34f91dfcc77c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_albattani, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:11:26 np0005603787 podman[211141]: 2026-01-31 10:11:26.030034859 +0000 UTC m=+0.141604762 container start 16176b5d0306ad2adb2f644a9874170d600962de9a4462094ae34f91dfcc77c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_albattani, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:11:26 np0005603787 podman[211141]: 2026-01-31 10:11:26.033056395 +0000 UTC m=+0.144626358 container attach 16176b5d0306ad2adb2f644a9874170d600962de9a4462094ae34f91dfcc77c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_albattani, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:11:26 np0005603787 cool_albattani[211160]: 167 167
Jan 31 05:11:26 np0005603787 systemd[1]: libpod-16176b5d0306ad2adb2f644a9874170d600962de9a4462094ae34f91dfcc77c9.scope: Deactivated successfully.
Jan 31 05:11:26 np0005603787 podman[211141]: 2026-01-31 10:11:26.03495002 +0000 UTC m=+0.146519943 container died 16176b5d0306ad2adb2f644a9874170d600962de9a4462094ae34f91dfcc77c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 05:11:26 np0005603787 systemd[1]: var-lib-containers-storage-overlay-d262ee551beee5eaea9eca8527bfc9fc799ee750cfadef5d3ae7347bd16f7f00-merged.mount: Deactivated successfully.
Jan 31 05:11:26 np0005603787 podman[211141]: 2026-01-31 10:11:26.07114297 +0000 UTC m=+0.182712883 container remove 16176b5d0306ad2adb2f644a9874170d600962de9a4462094ae34f91dfcc77c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 05:11:26 np0005603787 systemd[1]: libpod-conmon-16176b5d0306ad2adb2f644a9874170d600962de9a4462094ae34f91dfcc77c9.scope: Deactivated successfully.
Jan 31 05:11:26 np0005603787 podman[211257]: 2026-01-31 10:11:26.213040251 +0000 UTC m=+0.054340219 container create d822eaa5a8aba772d1892d5cbd5af3fa11377e56293c7a9e45429c30ce5d6200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 05:11:26 np0005603787 systemd[1]: Started libpod-conmon-d822eaa5a8aba772d1892d5cbd5af3fa11377e56293c7a9e45429c30ce5d6200.scope.
Jan 31 05:11:26 np0005603787 podman[211257]: 2026-01-31 10:11:26.184315362 +0000 UTC m=+0.025615410 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:11:26 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:11:26 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1283b86216adb273e9411b20ca902f42470dd37ad546a40c9ccf2d9953e45bfe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:11:26 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1283b86216adb273e9411b20ca902f42470dd37ad546a40c9ccf2d9953e45bfe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:11:26 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1283b86216adb273e9411b20ca902f42470dd37ad546a40c9ccf2d9953e45bfe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:11:26 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1283b86216adb273e9411b20ca902f42470dd37ad546a40c9ccf2d9953e45bfe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:11:26 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1283b86216adb273e9411b20ca902f42470dd37ad546a40c9ccf2d9953e45bfe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:11:26 np0005603787 podman[211257]: 2026-01-31 10:11:26.333025786 +0000 UTC m=+0.174325734 container init d822eaa5a8aba772d1892d5cbd5af3fa11377e56293c7a9e45429c30ce5d6200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lewin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 05:11:26 np0005603787 python3.9[211265]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:26 np0005603787 podman[211257]: 2026-01-31 10:11:26.343001341 +0000 UTC m=+0.184301319 container start d822eaa5a8aba772d1892d5cbd5af3fa11377e56293c7a9e45429c30ce5d6200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lewin, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 05:11:26 np0005603787 podman[211257]: 2026-01-31 10:11:26.346910182 +0000 UTC m=+0.188210220 container attach d822eaa5a8aba772d1892d5cbd5af3fa11377e56293c7a9e45429c30ce5d6200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lewin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 05:11:26 np0005603787 awesome_lewin[211275]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:11:26 np0005603787 awesome_lewin[211275]: --> All data devices are unavailable
Jan 31 05:11:26 np0005603787 systemd[1]: libpod-d822eaa5a8aba772d1892d5cbd5af3fa11377e56293c7a9e45429c30ce5d6200.scope: Deactivated successfully.
Jan 31 05:11:26 np0005603787 podman[211257]: 2026-01-31 10:11:26.740382086 +0000 UTC m=+0.581682054 container died d822eaa5a8aba772d1892d5cbd5af3fa11377e56293c7a9e45429c30ce5d6200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 05:11:26 np0005603787 systemd[1]: var-lib-containers-storage-overlay-1283b86216adb273e9411b20ca902f42470dd37ad546a40c9ccf2d9953e45bfe-merged.mount: Deactivated successfully.
Jan 31 05:11:26 np0005603787 podman[211257]: 2026-01-31 10:11:26.782449403 +0000 UTC m=+0.623749361 container remove d822eaa5a8aba772d1892d5cbd5af3fa11377e56293c7a9e45429c30ce5d6200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:11:26 np0005603787 systemd[1]: libpod-conmon-d822eaa5a8aba772d1892d5cbd5af3fa11377e56293c7a9e45429c30ce5d6200.scope: Deactivated successfully.
Jan 31 05:11:27 np0005603787 python3.9[211459]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:11:27 np0005603787 podman[211538]: 2026-01-31 10:11:27.168743893 +0000 UTC m=+0.039723142 container create 84e71b00c82c7491a6f7a7a86387828bc0408900d11bc76f128481b5134529b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 05:11:27 np0005603787 systemd[1]: Started libpod-conmon-84e71b00c82c7491a6f7a7a86387828bc0408900d11bc76f128481b5134529b9.scope.
Jan 31 05:11:27 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:11:27 np0005603787 podman[211538]: 2026-01-31 10:11:27.152391868 +0000 UTC m=+0.023371097 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:11:27 np0005603787 podman[211538]: 2026-01-31 10:11:27.25184515 +0000 UTC m=+0.122824399 container init 84e71b00c82c7491a6f7a7a86387828bc0408900d11bc76f128481b5134529b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:11:27 np0005603787 podman[211538]: 2026-01-31 10:11:27.257564182 +0000 UTC m=+0.128543431 container start 84e71b00c82c7491a6f7a7a86387828bc0408900d11bc76f128481b5134529b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:11:27 np0005603787 jolly_lamarr[211588]: 167 167
Jan 31 05:11:27 np0005603787 systemd[1]: libpod-84e71b00c82c7491a6f7a7a86387828bc0408900d11bc76f128481b5134529b9.scope: Deactivated successfully.
Jan 31 05:11:27 np0005603787 podman[211538]: 2026-01-31 10:11:27.261614548 +0000 UTC m=+0.132593797 container attach 84e71b00c82c7491a6f7a7a86387828bc0408900d11bc76f128481b5134529b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lamarr, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:11:27 np0005603787 conmon[211588]: conmon 84e71b00c82c7491a6f7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84e71b00c82c7491a6f7a7a86387828bc0408900d11bc76f128481b5134529b9.scope/container/memory.events
Jan 31 05:11:27 np0005603787 podman[211538]: 2026-01-31 10:11:27.262361259 +0000 UTC m=+0.133340478 container died 84e71b00c82c7491a6f7a7a86387828bc0408900d11bc76f128481b5134529b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:11:27 np0005603787 systemd[1]: var-lib-containers-storage-overlay-b332552a26f03f6a77913cd992eee8011cd96d1022ccc2c1762ca7bf89d68b8f-merged.mount: Deactivated successfully.
Jan 31 05:11:27 np0005603787 podman[211538]: 2026-01-31 10:11:27.307649059 +0000 UTC m=+0.178628308 container remove 84e71b00c82c7491a6f7a7a86387828bc0408900d11bc76f128481b5134529b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 05:11:27 np0005603787 systemd[1]: libpod-conmon-84e71b00c82c7491a6f7a7a86387828bc0408900d11bc76f128481b5134529b9.scope: Deactivated successfully.
Jan 31 05:11:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:27 np0005603787 podman[211688]: 2026-01-31 10:11:27.473195472 +0000 UTC m=+0.047045711 container create b1b1c41f7d3cdf62360885388cc3e897524057960235ac530b5e1f9058dcdc4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 05:11:27 np0005603787 systemd[1]: Started libpod-conmon-b1b1c41f7d3cdf62360885388cc3e897524057960235ac530b5e1f9058dcdc4a.scope.
Jan 31 05:11:27 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:11:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a07ea8d4198d4edf293f4de0831b190876e18b8c0ffca96cd26fe8cad09f1a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:11:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a07ea8d4198d4edf293f4de0831b190876e18b8c0ffca96cd26fe8cad09f1a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:11:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a07ea8d4198d4edf293f4de0831b190876e18b8c0ffca96cd26fe8cad09f1a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:11:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a07ea8d4198d4edf293f4de0831b190876e18b8c0ffca96cd26fe8cad09f1a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:11:27 np0005603787 podman[211688]: 2026-01-31 10:11:27.455755385 +0000 UTC m=+0.029605654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:11:27 np0005603787 python3.9[211682]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769854286.5152025-1319-143468730245008/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:27 np0005603787 podman[211688]: 2026-01-31 10:11:27.558406629 +0000 UTC m=+0.132256898 container init b1b1c41f7d3cdf62360885388cc3e897524057960235ac530b5e1f9058dcdc4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 05:11:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:11:27 np0005603787 podman[211688]: 2026-01-31 10:11:27.57464173 +0000 UTC m=+0.148491999 container start b1b1c41f7d3cdf62360885388cc3e897524057960235ac530b5e1f9058dcdc4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_torvalds, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:11:27 np0005603787 podman[211688]: 2026-01-31 10:11:27.578717787 +0000 UTC m=+0.152568086 container attach b1b1c41f7d3cdf62360885388cc3e897524057960235ac530b5e1f9058dcdc4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]: {
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:    "0": [
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:        {
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "devices": [
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "/dev/loop3"
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            ],
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "lv_name": "ceph_lv0",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "lv_size": "21470642176",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "name": "ceph_lv0",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "tags": {
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.cluster_name": "ceph",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.crush_device_class": "",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.encrypted": "0",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.objectstore": "bluestore",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.osd_id": "0",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.type": "block",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.vdo": "0",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.with_tpm": "0"
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            },
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "type": "block",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "vg_name": "ceph_vg0"
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:        }
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:    ],
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:    "1": [
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:        {
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "devices": [
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "/dev/loop4"
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            ],
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "lv_name": "ceph_lv1",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "lv_size": "21470642176",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "name": "ceph_lv1",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "tags": {
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.cluster_name": "ceph",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.crush_device_class": "",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.encrypted": "0",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.objectstore": "bluestore",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.osd_id": "1",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.type": "block",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.vdo": "0",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.with_tpm": "0"
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            },
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "type": "block",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "vg_name": "ceph_vg1"
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:        }
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:    ],
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:    "2": [
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:        {
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "devices": [
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "/dev/loop5"
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            ],
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "lv_name": "ceph_lv2",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "lv_size": "21470642176",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "name": "ceph_lv2",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "tags": {
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.cluster_name": "ceph",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.crush_device_class": "",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.encrypted": "0",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.objectstore": "bluestore",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.osd_id": "2",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.type": "block",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.vdo": "0",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:                "ceph.with_tpm": "0"
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            },
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "type": "block",
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:            "vg_name": "ceph_vg2"
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:        }
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]:    ]
Jan 31 05:11:27 np0005603787 elastic_torvalds[211704]: }
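The JSON block emitted by the elastic_torvalds container above is an LVM inventory of the three existing OSDs (osd.0, osd.1 and osd.2 backed by /dev/loop3, /dev/loop4 and /dev/loop5), in the format produced by ceph-volume lvm list --format json; the exact command line is not shown in this excerpt. A minimal sketch of summarising such output, assuming it has been saved to a local file (the file name and function name are illustrative):

    import json

    def summarize_osds(path="ceph_volume_lvm_list.json"):
        # Top level is a dict keyed by OSD id; each value is a list of LV entries.
        with open(path) as fh:
            osds = json.load(fh)
        for osd_id, devices in sorted(osds.items(), key=lambda kv: int(kv[0])):
            for dev in devices:
                tags = dev["tags"]
                print(f"osd.{osd_id}: {dev['lv_path']} on {', '.join(dev['devices'])} "
                      f"(fsid {tags['ceph.osd_fsid']}, {tags['ceph.objectstore']})")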
Jan 31 05:11:27 np0005603787 systemd[1]: libpod-b1b1c41f7d3cdf62360885388cc3e897524057960235ac530b5e1f9058dcdc4a.scope: Deactivated successfully.
Jan 31 05:11:27 np0005603787 podman[211688]: 2026-01-31 10:11:27.912858421 +0000 UTC m=+0.486708700 container died b1b1c41f7d3cdf62360885388cc3e897524057960235ac530b5e1f9058dcdc4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_torvalds, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:11:27 np0005603787 systemd[1]: var-lib-containers-storage-overlay-2a07ea8d4198d4edf293f4de0831b190876e18b8c0ffca96cd26fe8cad09f1a4-merged.mount: Deactivated successfully.
Jan 31 05:11:27 np0005603787 podman[211688]: 2026-01-31 10:11:27.958871792 +0000 UTC m=+0.532722071 container remove b1b1c41f7d3cdf62360885388cc3e897524057960235ac530b5e1f9058dcdc4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_torvalds, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 05:11:27 np0005603787 systemd[1]: libpod-conmon-b1b1c41f7d3cdf62360885388cc3e897524057960235ac530b5e1f9058dcdc4a.scope: Deactivated successfully.
Jan 31 05:11:28 np0005603787 python3.9[211923]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:28 np0005603787 podman[211945]: 2026-01-31 10:11:28.362162605 +0000 UTC m=+0.045769205 container create a88c05f159562e8785203428dad0bd1927186c06beeb1ae1eed2c4c6cc21837f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lovelace, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 05:11:28 np0005603787 systemd[1]: Started libpod-conmon-a88c05f159562e8785203428dad0bd1927186c06beeb1ae1eed2c4c6cc21837f.scope.
Jan 31 05:11:28 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:11:28 np0005603787 podman[211945]: 2026-01-31 10:11:28.341608749 +0000 UTC m=+0.025215399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:11:28 np0005603787 podman[211945]: 2026-01-31 10:11:28.442838391 +0000 UTC m=+0.126444981 container init a88c05f159562e8785203428dad0bd1927186c06beeb1ae1eed2c4c6cc21837f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lovelace, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 05:11:28 np0005603787 podman[211945]: 2026-01-31 10:11:28.448014199 +0000 UTC m=+0.131620899 container start a88c05f159562e8785203428dad0bd1927186c06beeb1ae1eed2c4c6cc21837f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 05:11:28 np0005603787 youthful_lovelace[211998]: 167 167
Jan 31 05:11:28 np0005603787 systemd[1]: libpod-a88c05f159562e8785203428dad0bd1927186c06beeb1ae1eed2c4c6cc21837f.scope: Deactivated successfully.
Jan 31 05:11:28 np0005603787 podman[211945]: 2026-01-31 10:11:28.452719653 +0000 UTC m=+0.136326213 container attach a88c05f159562e8785203428dad0bd1927186c06beeb1ae1eed2c4c6cc21837f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lovelace, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:11:28 np0005603787 podman[211945]: 2026-01-31 10:11:28.453595478 +0000 UTC m=+0.137202058 container died a88c05f159562e8785203428dad0bd1927186c06beeb1ae1eed2c4c6cc21837f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lovelace, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:11:28 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ebc64f277bc4891314fcd9001dc14fa7fa962eb46cfffda1e39c385f68ffb0f7-merged.mount: Deactivated successfully.
Jan 31 05:11:28 np0005603787 podman[211945]: 2026-01-31 10:11:28.490022375 +0000 UTC m=+0.173628945 container remove a88c05f159562e8785203428dad0bd1927186c06beeb1ae1eed2c4c6cc21837f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 05:11:28 np0005603787 systemd[1]: libpod-conmon-a88c05f159562e8785203428dad0bd1927186c06beeb1ae1eed2c4c6cc21837f.scope: Deactivated successfully.
Jan 31 05:11:28 np0005603787 podman[212098]: 2026-01-31 10:11:28.612695489 +0000 UTC m=+0.048002989 container create 431739c35e54ced154b3c4e7a291b3a530ed8556f195b57ed65e41e7c855c360 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:11:28 np0005603787 systemd[1]: Started libpod-conmon-431739c35e54ced154b3c4e7a291b3a530ed8556f195b57ed65e41e7c855c360.scope.
Jan 31 05:11:28 np0005603787 podman[212098]: 2026-01-31 10:11:28.587645745 +0000 UTC m=+0.022953235 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:11:28 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:11:28 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93cc2094b86458e93a8de7b4256d425571c21edac92bc0c40c77544ecbf4868e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:11:28 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93cc2094b86458e93a8de7b4256d425571c21edac92bc0c40c77544ecbf4868e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:11:28 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93cc2094b86458e93a8de7b4256d425571c21edac92bc0c40c77544ecbf4868e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:11:28 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93cc2094b86458e93a8de7b4256d425571c21edac92bc0c40c77544ecbf4868e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:11:28 np0005603787 podman[212098]: 2026-01-31 10:11:28.711707597 +0000 UTC m=+0.147015107 container init 431739c35e54ced154b3c4e7a291b3a530ed8556f195b57ed65e41e7c855c360 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Jan 31 05:11:28 np0005603787 podman[212098]: 2026-01-31 10:11:28.718638535 +0000 UTC m=+0.153946035 container start 431739c35e54ced154b3c4e7a291b3a530ed8556f195b57ed65e41e7c855c360 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Jan 31 05:11:28 np0005603787 podman[212098]: 2026-01-31 10:11:28.72232243 +0000 UTC m=+0.157629920 container attach 431739c35e54ced154b3c4e7a291b3a530ed8556f195b57ed65e41e7c855c360 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 05:11:28 np0005603787 python3.9[212151]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:11:29 np0005603787 lvm[212305]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:11:29 np0005603787 lvm[212304]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:11:29 np0005603787 lvm[212305]: VG ceph_vg1 finished
Jan 31 05:11:29 np0005603787 lvm[212304]: VG ceph_vg0 finished
Jan 31 05:11:29 np0005603787 lvm[212310]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:11:29 np0005603787 lvm[212310]: VG ceph_vg2 finished
Jan 31 05:11:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:29 np0005603787 peaceful_diffie[212118]: {}
Jan 31 05:11:29 np0005603787 systemd[1]: libpod-431739c35e54ced154b3c4e7a291b3a530ed8556f195b57ed65e41e7c855c360.scope: Deactivated successfully.
Jan 31 05:11:29 np0005603787 podman[212098]: 2026-01-31 10:11:29.42787217 +0000 UTC m=+0.863179680 container died 431739c35e54ced154b3c4e7a291b3a530ed8556f195b57ed65e41e7c855c360 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_diffie, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:11:29 np0005603787 systemd[1]: var-lib-containers-storage-overlay-93cc2094b86458e93a8de7b4256d425571c21edac92bc0c40c77544ecbf4868e-merged.mount: Deactivated successfully.
Jan 31 05:11:29 np0005603787 podman[212098]: 2026-01-31 10:11:29.476925607 +0000 UTC m=+0.912233087 container remove 431739c35e54ced154b3c4e7a291b3a530ed8556f195b57ed65e41e7c855c360 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_diffie, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:11:29 np0005603787 systemd[1]: libpod-conmon-431739c35e54ced154b3c4e7a291b3a530ed8556f195b57ed65e41e7c855c360.scope: Deactivated successfully.
Jan 31 05:11:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:11:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:11:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:11:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:11:29 np0005603787 python3.9[212399]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:30 np0005603787 python3.9[212576]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:11:30 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:11:30 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:11:30 np0005603787 python3.9[212729]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:11:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:31 np0005603787 python3.9[212883]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
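Editor's note: the command and blockinfile invocations above are the EDPM firewall role's check-then-apply flow for nftables (the #012 sequences in the blockinfile line are journald's octal escaping of embedded newlines). A condensed sketch of the same sequence, reconstructed from the logged parameters and assuming the edpm-*.nft files already exist under /etc/nftables:

  # 1. dry-run the full rule set before touching the kernel
  cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
      /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
      /etc/nftables/edpm-jumps.nft | nft -c -f -
  # 2. persist the include lines in /etc/sysconfig/nftables.conf (the blockinfile
  #    task above does this, validating the result with `nft -c -f`)
  # 3. load the chains, then flush and reload the rules and jump updates
  nft -f /etc/nftables/edpm-chains.nft
  cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
      /etc/nftables/edpm-update-jumps.nft | nft -f -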
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:11:31.547714) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854291547743, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2044, "num_deletes": 251, "total_data_size": 3578248, "memory_usage": 3642232, "flush_reason": "Manual Compaction"}
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854291561484, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3491018, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9782, "largest_seqno": 11825, "table_properties": {"data_size": 3481721, "index_size": 5919, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17868, "raw_average_key_size": 19, "raw_value_size": 3463313, "raw_average_value_size": 3772, "num_data_blocks": 268, "num_entries": 918, "num_filter_entries": 918, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769854062, "oldest_key_time": 1769854062, "file_creation_time": 1769854291, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 13935 microseconds, and 5687 cpu microseconds.
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:11:31.561640) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3491018 bytes OK
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:11:31.561716) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:11:31.563547) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:11:31.563617) EVENT_LOG_v1 {"time_micros": 1769854291563604, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:11:31.563656) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3569713, prev total WAL file size 3569713, number of live WAL files 2.
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:11:31.564898) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3409KB)], [26(6154KB)]
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854291564998, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9793590, "oldest_snapshot_seqno": -1}
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3755 keys, 8179075 bytes, temperature: kUnknown
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854291613281, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8179075, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8150159, "index_size": 18438, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 90140, "raw_average_key_size": 24, "raw_value_size": 8078607, "raw_average_value_size": 2151, "num_data_blocks": 798, "num_entries": 3755, "num_filter_entries": 3755, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853439, "oldest_key_time": 0, "file_creation_time": 1769854291, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:11:31.613516) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8179075 bytes
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:11:31.614819) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.6 rd, 169.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.0 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4269, records dropped: 514 output_compression: NoCompression
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:11:31.614840) EVENT_LOG_v1 {"time_micros": 1769854291614828, "job": 10, "event": "compaction_finished", "compaction_time_micros": 48348, "compaction_time_cpu_micros": 22867, "output_level": 6, "num_output_files": 1, "total_output_size": 8179075, "num_input_records": 4269, "num_output_records": 3755, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854291615300, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854291615873, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:11:31.564743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:11:31.615948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:11:31.615957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:11:31.615959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:11:31.615962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:11:31 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:11:31.615964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
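Editor's note: the rocksdb flush/compaction burst above is the monitor's own store maintenance; ceph-mon typically compacts its store.db after trimming, so "Manual Compaction" here does not imply an operator action. For reference, the same compaction can be requested on demand and the store size checked (monitor name and path taken from the log):

  ceph tell mon.compute-0 compact
  du -sh /var/lib/ceph/mon/ceph-compute-0/store.db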
Jan 31 05:11:32 np0005603787 python3.9[213038]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:11:32 np0005603787 python3.9[213190]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:11:33 np0005603787 python3.9[213313]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769854292.3243237-1391-68989203238592/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:33 np0005603787 python3.9[213465]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:11:34 np0005603787 python3.9[213588]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769854293.510358-1406-197814825989515/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:35 np0005603787 python3.9[213740]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:11:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:35 np0005603787 python3.9[213863]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769854294.5757337-1421-178886276556390/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:36 np0005603787 python3.9[214015]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:11:36 np0005603787 systemd[1]: Reloading.
Jan 31 05:11:36 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:11:36 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:11:36 np0005603787 systemd[1]: Reached target edpm_libvirt.target.
Jan 31 05:11:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:11:37.049 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:11:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:11:37.050 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:11:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:11:37.051 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:11:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:37 np0005603787 python3.9[214205]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 05:11:37 np0005603787 systemd[1]: Reloading.
Jan 31 05:11:37 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:11:37 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:11:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:11:37 np0005603787 systemd[1]: Reloading.
Jan 31 05:11:37 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:11:37 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
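Editor's note: the preceding copy and systemd tasks install the EDPM libvirt units (edpm_libvirt.target, edpm_libvirt_guests.service, virt-guest-shutdown.target) and enable/start the target, which is why systemd reloads and then reports reaching edpm_libvirt.target. A rough command-line equivalent of the logged module arguments (unit file contents are not shown in the log):

  systemctl daemon-reload
  systemctl enable --now edpm_libvirt.target
  systemctl enable edpm_libvirt_guests.service
  systemctl list-dependencies edpm_libvirt.target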
Jan 31 05:11:38 np0005603787 systemd[1]: session-49.scope: Deactivated successfully.
Jan 31 05:11:38 np0005603787 systemd[1]: session-49.scope: Consumed 2min 53.728s CPU time.
Jan 31 05:11:38 np0005603787 systemd-logind[786]: Session 49 logged out. Waiting for processes to exit.
Jan 31 05:11:38 np0005603787 systemd-logind[786]: Removed session 49.
Jan 31 05:11:38 np0005603787 podman[214303]: 2026-01-31 10:11:38.691475572 +0000 UTC m=+0.053537715 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 05:11:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:39 np0005603787 podman[214321]: 2026-01-31 10:11:39.91074843 +0000 UTC m=+0.129526700 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
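Editor's note: the periodic health_status=healthy events above are podman running each container's configured healthcheck (test '/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/<name>). A sketch of triggering and inspecting the same check by hand (the inspect field path may vary between podman versions):

  podman healthcheck run ovn_metadata_agent && echo "ovn_metadata_agent healthy"
  podman inspect --format '{{.State.Health.Status}}' ovn_controller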
Jan 31 05:11:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:42 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:11:43
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', '.mgr', 'images', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
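Editor's note: the balancer lines show the mgr's upmap balancer evaluating the listed pools and preparing 0/10 changes, i.e. the placement is already balanced. Operator-side commands corresponding to this activity, for reference:

  ceph balancer status
  ceph balancer mode upmap
  ceph balancer eval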
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:43 np0005603787 systemd-logind[786]: New session 50 of user zuul.
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:11:43 np0005603787 systemd[1]: Started Session 50 of User zuul.
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:11:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:11:44 np0005603787 python3.9[214501]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:11:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:45 np0005603787 python3.9[214655]: ansible-ansible.builtin.service_facts Invoked
Jan 31 05:11:45 np0005603787 network[214672]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 05:11:45 np0005603787 network[214673]: 'network-scripts' will be removed from distribution in near future.
Jan 31 05:11:45 np0005603787 network[214674]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 05:11:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:11:48 np0005603787 python3.9[214946]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:11:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:49 np0005603787 python3.9[215030]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:11:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:11:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1786947556520692e-06 of space, bias 4.0, pg target 0.0014144337067824831 quantized to 16 (current 16)
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:11:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
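Editor's note: the pg_autoscaler pass above computes a target PG count for every pool and leaves each at its current value (32 PGs for most pools, 1 for .mgr, 16 for cephfs.cephfs.meta). The same figures can be viewed, and the per-pool mode adjusted, with the following (POOL is a placeholder):

  ceph osd pool autoscale-status
  ceph osd pool set POOL pg_autoscale_mode warn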
Jan 31 05:11:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:55 np0005603787 python3.9[215183]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:11:56 np0005603787 python3.9[215335]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:11:57 np0005603787 python3.9[215488]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:11:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:11:57 np0005603787 python3.9[215640]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:11:58 np0005603787 python3.9[215793]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:11:59 np0005603787 python3.9[215916]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769854318.0739894-90-43973295215082/.source.iscsi _original_basename=.8rzqwxgg follow=False checksum=16701a817e421c2febfde9e6913051c735ba92c8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:11:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:11:59 np0005603787 python3.9[216068]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:00 np0005603787 python3.9[216220]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:01 np0005603787 python3.9[216372]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:12:01 np0005603787 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 31 05:12:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:12:02 np0005603787 python3.9[216528]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:12:02 np0005603787 systemd[1]: Reloading.
Jan 31 05:12:02 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:12:02 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:12:03 np0005603787 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 05:12:03 np0005603787 systemd[1]: Starting Open-iSCSI...
Jan 31 05:12:03 np0005603787 kernel: Loading iSCSI transport class v2.0-870.
Jan 31 05:12:03 np0005603787 systemd[1]: Started Open-iSCSI.
Jan 31 05:12:03 np0005603787 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 31 05:12:03 np0005603787 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
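Editor's note: the preceding tasks configure the iSCSI initiator: generate an IQN with /usr/sbin/iscsi-iname, write it to /etc/iscsi/initiatorname.iscsi, mark the reset with /etc/iscsi/.initiator_reset, restrict the CHAP digest algorithms in iscsid.conf, and enable iscsid.socket plus iscsid. A condensed shell equivalent of the logged steps (a sketch; the generated IQN is host-specific):

  echo "InitiatorName=$(/usr/sbin/iscsi-iname)" > /etc/iscsi/initiatorname.iscsi
  chmod 0644 /etc/iscsi/initiatorname.iscsi
  touch /etc/iscsi/.initiator_reset
  # matches the lineinfile change above: allow only these CHAP digest algorithms
  grep -q '^node.session.auth.chap_algs' /etc/iscsi/iscsid.conf \
    && sed -i 's/^node.session.auth.chap_algs.*/node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5/' /etc/iscsi/iscsid.conf \
    || echo 'node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5' >> /etc/iscsi/iscsid.conf
  systemctl enable --now iscsid.socket iscsid.service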
Jan 31 05:12:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:03 np0005603787 python3.9[216728]: ansible-ansible.builtin.service_facts Invoked
Jan 31 05:12:03 np0005603787 network[216745]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 05:12:03 np0005603787 network[216746]: 'network-scripts' will be removed from distribution in near future.
Jan 31 05:12:03 np0005603787 network[216747]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 05:12:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:07 np0005603787 python3.9[217019]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:12:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:12:08 np0005603787 podman[217024]: 2026-01-31 10:12:08.839883504 +0000 UTC m=+0.061375760 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 05:12:09 np0005603787 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 05:12:09 np0005603787 systemd[1]: Starting man-db-cache-update.service...
Jan 31 05:12:09 np0005603787 systemd[1]: Reloading.
Jan 31 05:12:09 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:12:09 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:12:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:09 np0005603787 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 05:12:09 np0005603787 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 05:12:09 np0005603787 systemd[1]: Finished man-db-cache-update.service.
Jan 31 05:12:09 np0005603787 systemd[1]: run-rc0b3c36e2d8d40389a27cb82713ac43d.service: Deactivated successfully.
Jan 31 05:12:10 np0005603787 podman[217325]: 2026-01-31 10:12:10.441139233 +0000 UTC m=+0.100707023 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 05:12:10 np0005603787 python3.9[217371]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 05:12:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:11 np0005603787 python3.9[217530]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 31 05:12:12 np0005603787 python3.9[217686]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:12:12 np0005603787 python3.9[217809]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769854331.6318598-178-8316596303379/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:12:13 np0005603787 python3.9[217961]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:12:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:12:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:12:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:12:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:12:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:12:14 np0005603787 python3.9[218113]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 05:12:14 np0005603787 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 05:12:14 np0005603787 systemd[1]: Stopped Load Kernel Modules.
Jan 31 05:12:14 np0005603787 systemd[1]: Stopping Load Kernel Modules...
Jan 31 05:12:14 np0005603787 systemd[1]: Starting Load Kernel Modules...
Jan 31 05:12:14 np0005603787 systemd[1]: Finished Load Kernel Modules.
Jan 31 05:12:14 np0005603787 python3.9[218269]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:12:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:15 np0005603787 python3.9[218422]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:12:16 np0005603787 python3.9[218574]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:12:16 np0005603787 python3.9[218697]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769854335.839572-229-19122751678142/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:17 np0005603787 python3.9[218849]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:12:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:12:17 np0005603787 python3.9[219002]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:18 np0005603787 python3.9[219154]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:19 np0005603787 python3.9[219306]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:19 np0005603787 python3.9[219458]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:20 np0005603787 python3.9[219610]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:21 np0005603787 python3.9[219762]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:21 np0005603787 python3.9[219914]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:22 np0005603787 python3.9[220066]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:12:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:12:22 np0005603787 python3.9[220220]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:12:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:23 np0005603787 python3.9[220373]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:12:23 np0005603787 systemd[1]: Listening on multipathd control socket.
Jan 31 05:12:24 np0005603787 python3.9[220529]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:12:24 np0005603787 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 31 05:12:24 np0005603787 udevadm[220534]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 31 05:12:24 np0005603787 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 31 05:12:24 np0005603787 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 05:12:24 np0005603787 multipathd[220537]: --------start up--------
Jan 31 05:12:24 np0005603787 multipathd[220537]: read /etc/multipath.conf
Jan 31 05:12:24 np0005603787 multipathd[220537]: path checkers start up
Jan 31 05:12:24 np0005603787 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 05:12:25 np0005603787 python3.9[220696]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 05:12:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:25 np0005603787 python3.9[220848]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 31 05:12:25 np0005603787 kernel: Key type psk registered
Jan 31 05:12:26 np0005603787 python3.9[221009]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:12:27 np0005603787 python3.9[221132]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769854346.1587112-359-36303937726519/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:12:27 np0005603787 python3.9[221284]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:28 np0005603787 python3.9[221436]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 05:12:28 np0005603787 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 05:12:28 np0005603787 systemd[1]: Stopped Load Kernel Modules.
Jan 31 05:12:28 np0005603787 systemd[1]: Stopping Load Kernel Modules...
Jan 31 05:12:28 np0005603787 systemd[1]: Starting Load Kernel Modules...
Jan 31 05:12:28 np0005603787 systemd[1]: Finished Load Kernel Modules.
Jan 31 05:12:29 np0005603787 python3.9[221592]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:12:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:12:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:12:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:12:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:12:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:12:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:12:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:12:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:12:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:12:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:12:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:12:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:12:30 np0005603787 podman[221740]: 2026-01-31 10:12:30.572793386 +0000 UTC m=+0.054054748 container create 7619248e1a316b8a655bf6be09e43c132bb5f9555440da1e29d7f365194c50f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 05:12:30 np0005603787 systemd[1]: Started libpod-conmon-7619248e1a316b8a655bf6be09e43c132bb5f9555440da1e29d7f365194c50f7.scope.
Jan 31 05:12:30 np0005603787 podman[221740]: 2026-01-31 10:12:30.54895263 +0000 UTC m=+0.030214012 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:12:30 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:12:30 np0005603787 podman[221740]: 2026-01-31 10:12:30.660529571 +0000 UTC m=+0.141790903 container init 7619248e1a316b8a655bf6be09e43c132bb5f9555440da1e29d7f365194c50f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_shaw, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 05:12:30 np0005603787 podman[221740]: 2026-01-31 10:12:30.665383284 +0000 UTC m=+0.146644646 container start 7619248e1a316b8a655bf6be09e43c132bb5f9555440da1e29d7f365194c50f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 05:12:30 np0005603787 podman[221740]: 2026-01-31 10:12:30.669287712 +0000 UTC m=+0.150549064 container attach 7619248e1a316b8a655bf6be09e43c132bb5f9555440da1e29d7f365194c50f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_shaw, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Jan 31 05:12:30 np0005603787 nostalgic_shaw[221756]: 167 167
Jan 31 05:12:30 np0005603787 systemd[1]: libpod-7619248e1a316b8a655bf6be09e43c132bb5f9555440da1e29d7f365194c50f7.scope: Deactivated successfully.
Jan 31 05:12:30 np0005603787 conmon[221756]: conmon 7619248e1a316b8a655b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7619248e1a316b8a655bf6be09e43c132bb5f9555440da1e29d7f365194c50f7.scope/container/memory.events
Jan 31 05:12:30 np0005603787 podman[221740]: 2026-01-31 10:12:30.671521034 +0000 UTC m=+0.152782356 container died 7619248e1a316b8a655bf6be09e43c132bb5f9555440da1e29d7f365194c50f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 05:12:30 np0005603787 systemd[1]: var-lib-containers-storage-overlay-7d6c03302e84acdb50804df8733b3d547db7588e1bd94899a038852f773d8401-merged.mount: Deactivated successfully.
Jan 31 05:12:30 np0005603787 podman[221740]: 2026-01-31 10:12:30.706729783 +0000 UTC m=+0.187991115 container remove 7619248e1a316b8a655bf6be09e43c132bb5f9555440da1e29d7f365194c50f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:12:30 np0005603787 systemd[1]: libpod-conmon-7619248e1a316b8a655bf6be09e43c132bb5f9555440da1e29d7f365194c50f7.scope: Deactivated successfully.
Jan 31 05:12:30 np0005603787 podman[221780]: 2026-01-31 10:12:30.84832616 +0000 UTC m=+0.035679823 container create 7cb69db3cfba69e881faaa60e2ca937f568844126755974123d7329300a9e54a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:12:30 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:12:30 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:12:30 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:12:30 np0005603787 systemd[1]: Started libpod-conmon-7cb69db3cfba69e881faaa60e2ca937f568844126755974123d7329300a9e54a.scope.
Jan 31 05:12:30 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:12:30 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff43d26ee7c4c2822b37c1aa126275978ab4d1759ab943e031565b647793a540/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:12:30 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff43d26ee7c4c2822b37c1aa126275978ab4d1759ab943e031565b647793a540/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:12:30 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff43d26ee7c4c2822b37c1aa126275978ab4d1759ab943e031565b647793a540/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:12:30 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff43d26ee7c4c2822b37c1aa126275978ab4d1759ab943e031565b647793a540/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:12:30 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff43d26ee7c4c2822b37c1aa126275978ab4d1759ab943e031565b647793a540/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:12:30 np0005603787 podman[221780]: 2026-01-31 10:12:30.831928758 +0000 UTC m=+0.019282421 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:12:30 np0005603787 podman[221780]: 2026-01-31 10:12:30.93012916 +0000 UTC m=+0.117482843 container init 7cb69db3cfba69e881faaa60e2ca937f568844126755974123d7329300a9e54a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:12:30 np0005603787 podman[221780]: 2026-01-31 10:12:30.942875362 +0000 UTC m=+0.130229025 container start 7cb69db3cfba69e881faaa60e2ca937f568844126755974123d7329300a9e54a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 05:12:30 np0005603787 podman[221780]: 2026-01-31 10:12:30.947943331 +0000 UTC m=+0.135296994 container attach 7cb69db3cfba69e881faaa60e2ca937f568844126755974123d7329300a9e54a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 05:12:31 np0005603787 systemd[1]: Reloading.
Jan 31 05:12:31 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:12:31 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:12:31 np0005603787 charming_kirch[221796]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:12:31 np0005603787 charming_kirch[221796]: --> All data devices are unavailable
Jan 31 05:12:31 np0005603787 podman[221780]: 2026-01-31 10:12:31.366948511 +0000 UTC m=+0.554302174 container died 7cb69db3cfba69e881faaa60e2ca937f568844126755974123d7329300a9e54a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:12:31 np0005603787 systemd[1]: libpod-7cb69db3cfba69e881faaa60e2ca937f568844126755974123d7329300a9e54a.scope: Deactivated successfully.
Jan 31 05:12:31 np0005603787 systemd[1]: Reloading.
Jan 31 05:12:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:31 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:12:31 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:12:31 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ff43d26ee7c4c2822b37c1aa126275978ab4d1759ab943e031565b647793a540-merged.mount: Deactivated successfully.
Jan 31 05:12:31 np0005603787 podman[221780]: 2026-01-31 10:12:31.659326358 +0000 UTC m=+0.846680021 container remove 7cb69db3cfba69e881faaa60e2ca937f568844126755974123d7329300a9e54a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:12:31 np0005603787 systemd[1]: libpod-conmon-7cb69db3cfba69e881faaa60e2ca937f568844126755974123d7329300a9e54a.scope: Deactivated successfully.
Jan 31 05:12:31 np0005603787 systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 05:12:31 np0005603787 lvm[221960]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:12:31 np0005603787 lvm[221960]: VG ceph_vg2 finished
Jan 31 05:12:31 np0005603787 systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 05:12:31 np0005603787 lvm[221958]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:12:31 np0005603787 lvm[221958]: VG ceph_vg1 finished
Jan 31 05:12:31 np0005603787 lvm[221959]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:12:31 np0005603787 lvm[221959]: VG ceph_vg0 finished
Jan 31 05:12:31 np0005603787 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 05:12:31 np0005603787 systemd[1]: Starting man-db-cache-update.service...
Jan 31 05:12:31 np0005603787 systemd[1]: Reloading.
Jan 31 05:12:32 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:12:32 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:12:32 np0005603787 podman[222062]: 2026-01-31 10:12:32.068968642 +0000 UTC m=+0.040229918 container create a1686349d1023e22787377cb40f2ed030fe1b95a51b9fd92c9ea47a163bb4360 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:12:32 np0005603787 podman[222062]: 2026-01-31 10:12:32.054736861 +0000 UTC m=+0.025998167 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:12:32 np0005603787 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 05:12:32 np0005603787 systemd[1]: Started libpod-conmon-a1686349d1023e22787377cb40f2ed030fe1b95a51b9fd92c9ea47a163bb4360.scope.
Jan 31 05:12:32 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:12:32 np0005603787 podman[222062]: 2026-01-31 10:12:32.245713116 +0000 UTC m=+0.216974412 container init a1686349d1023e22787377cb40f2ed030fe1b95a51b9fd92c9ea47a163bb4360 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_sutherland, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:12:32 np0005603787 podman[222062]: 2026-01-31 10:12:32.250680373 +0000 UTC m=+0.221941649 container start a1686349d1023e22787377cb40f2ed030fe1b95a51b9fd92c9ea47a163bb4360 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 05:12:32 np0005603787 podman[222062]: 2026-01-31 10:12:32.254311543 +0000 UTC m=+0.225572819 container attach a1686349d1023e22787377cb40f2ed030fe1b95a51b9fd92c9ea47a163bb4360 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 05:12:32 np0005603787 systemd[1]: libpod-a1686349d1023e22787377cb40f2ed030fe1b95a51b9fd92c9ea47a163bb4360.scope: Deactivated successfully.
Jan 31 05:12:32 np0005603787 gallant_sutherland[222308]: 167 167
Jan 31 05:12:32 np0005603787 conmon[222308]: conmon a1686349d1023e227873 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a1686349d1023e22787377cb40f2ed030fe1b95a51b9fd92c9ea47a163bb4360.scope/container/memory.events
Jan 31 05:12:32 np0005603787 podman[222062]: 2026-01-31 10:12:32.256018299 +0000 UTC m=+0.227279575 container died a1686349d1023e22787377cb40f2ed030fe1b95a51b9fd92c9ea47a163bb4360 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 05:12:32 np0005603787 systemd[1]: var-lib-containers-storage-overlay-5cc1140bdc04563c5d8772ed95d703d90943f5a52b46eeef0ad0b2b9d71ea603-merged.mount: Deactivated successfully.
Jan 31 05:12:32 np0005603787 podman[222062]: 2026-01-31 10:12:32.293662715 +0000 UTC m=+0.264923991 container remove a1686349d1023e22787377cb40f2ed030fe1b95a51b9fd92c9ea47a163bb4360 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 05:12:32 np0005603787 systemd[1]: libpod-conmon-a1686349d1023e22787377cb40f2ed030fe1b95a51b9fd92c9ea47a163bb4360.scope: Deactivated successfully.
Jan 31 05:12:32 np0005603787 podman[222629]: 2026-01-31 10:12:32.405237997 +0000 UTC m=+0.031240841 container create 610f4d66ae8825164cb8670c86af15a57d329b2f2b5cf1d5631360de99e4e555 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 05:12:32 np0005603787 systemd[1]: Started libpod-conmon-610f4d66ae8825164cb8670c86af15a57d329b2f2b5cf1d5631360de99e4e555.scope.
Jan 31 05:12:32 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:12:32 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed87cfccdf7700484843f1c68e2f7d2a97328e711a55bc7f95f2232262218c66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:12:32 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed87cfccdf7700484843f1c68e2f7d2a97328e711a55bc7f95f2232262218c66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:12:32 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed87cfccdf7700484843f1c68e2f7d2a97328e711a55bc7f95f2232262218c66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:12:32 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed87cfccdf7700484843f1c68e2f7d2a97328e711a55bc7f95f2232262218c66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:12:32 np0005603787 podman[222629]: 2026-01-31 10:12:32.479956982 +0000 UTC m=+0.105959826 container init 610f4d66ae8825164cb8670c86af15a57d329b2f2b5cf1d5631360de99e4e555 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 05:12:32 np0005603787 podman[222629]: 2026-01-31 10:12:32.486235846 +0000 UTC m=+0.112238670 container start 610f4d66ae8825164cb8670c86af15a57d329b2f2b5cf1d5631360de99e4e555 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dhawan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 05:12:32 np0005603787 podman[222629]: 2026-01-31 10:12:32.390949764 +0000 UTC m=+0.016952608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:12:32 np0005603787 podman[222629]: 2026-01-31 10:12:32.489746252 +0000 UTC m=+0.115749076 container attach 610f4d66ae8825164cb8670c86af15a57d329b2f2b5cf1d5631360de99e4e555 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 05:12:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]: {
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:    "0": [
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:        {
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "devices": [
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "/dev/loop3"
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            ],
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "lv_name": "ceph_lv0",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "lv_size": "21470642176",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "name": "ceph_lv0",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "tags": {
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.cluster_name": "ceph",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.crush_device_class": "",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.encrypted": "0",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.objectstore": "bluestore",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.osd_id": "0",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.type": "block",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.vdo": "0",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.with_tpm": "0"
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            },
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "type": "block",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "vg_name": "ceph_vg0"
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:        }
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:    ],
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:    "1": [
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:        {
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "devices": [
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "/dev/loop4"
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            ],
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "lv_name": "ceph_lv1",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "lv_size": "21470642176",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "name": "ceph_lv1",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "tags": {
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.cluster_name": "ceph",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.crush_device_class": "",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.encrypted": "0",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.objectstore": "bluestore",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.osd_id": "1",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.type": "block",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.vdo": "0",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.with_tpm": "0"
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            },
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "type": "block",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "vg_name": "ceph_vg1"
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:        }
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:    ],
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:    "2": [
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:        {
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "devices": [
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "/dev/loop5"
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            ],
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "lv_name": "ceph_lv2",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "lv_size": "21470642176",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "name": "ceph_lv2",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "tags": {
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.cluster_name": "ceph",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.crush_device_class": "",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.encrypted": "0",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.objectstore": "bluestore",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.osd_id": "2",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.type": "block",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.vdo": "0",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:                "ceph.with_tpm": "0"
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            },
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "type": "block",
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:            "vg_name": "ceph_vg2"
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:        }
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]:    ]
Jan 31 05:12:32 np0005603787 wonderful_dhawan[222792]: }
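[Annotation] The JSON block printed by the wonderful_dhawan container above resembles the output of "ceph-volume lvm list --format json": one key per OSD id, each with its block LV and ceph.* tags. A minimal sketch, assuming that command is available on the host or inside the ceph container (the helper name is hypothetical), for turning such a listing into an OSD-id-to-LV-path map:

    #!/usr/bin/env python3
    # Minimal sketch (assumption): read `ceph-volume lvm list --format json`-style
    # output, as captured above, and map each OSD id to its block LV path.
    import json
    import subprocess

    def osd_block_devices():
        raw = subprocess.check_output(
            ["ceph-volume", "lvm", "list", "--format", "json"], text=True)
        listing = json.loads(raw)  # e.g. {"1": [{...}], "2": [{...}]}
        devices = {}
        for osd_id, lvs in listing.items():
            for lv in lvs:
                if lv.get("type") == "block":
                    devices[osd_id] = lv["lv_path"]
        return devices

    if __name__ == "__main__":
        for osd_id, path in sorted(osd_block_devices().items()):
            print(osd_id, path)  # e.g. 1 /dev/ceph_vg1/ceph_lv1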
Jan 31 05:12:32 np0005603787 systemd[1]: libpod-610f4d66ae8825164cb8670c86af15a57d329b2f2b5cf1d5631360de99e4e555.scope: Deactivated successfully.
Jan 31 05:12:32 np0005603787 podman[222629]: 2026-01-31 10:12:32.758876729 +0000 UTC m=+0.384879553 container died 610f4d66ae8825164cb8670c86af15a57d329b2f2b5cf1d5631360de99e4e555 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dhawan, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:12:32 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ed87cfccdf7700484843f1c68e2f7d2a97328e711a55bc7f95f2232262218c66-merged.mount: Deactivated successfully.
Jan 31 05:12:32 np0005603787 podman[222629]: 2026-01-31 10:12:32.829675617 +0000 UTC m=+0.455678441 container remove 610f4d66ae8825164cb8670c86af15a57d329b2f2b5cf1d5631360de99e4e555 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 05:12:32 np0005603787 systemd[1]: libpod-conmon-610f4d66ae8825164cb8670c86af15a57d329b2f2b5cf1d5631360de99e4e555.scope: Deactivated successfully.
Jan 31 05:12:33 np0005603787 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 05:12:33 np0005603787 systemd[1]: Finished man-db-cache-update.service.
Jan 31 05:12:33 np0005603787 systemd[1]: man-db-cache-update.service: Consumed 1.026s CPU time.
Jan 31 05:12:33 np0005603787 systemd[1]: run-rc1a0adfa60f24c18951846d038750f36.service: Deactivated successfully.
Jan 31 05:12:33 np0005603787 podman[223495]: 2026-01-31 10:12:33.270659283 +0000 UTC m=+0.052760853 container create 57887f25336f6378fe6549828eab862d3c193c837d059d95488efc8d405904f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_napier, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:12:33 np0005603787 systemd[1]: Started libpod-conmon-57887f25336f6378fe6549828eab862d3c193c837d059d95488efc8d405904f3.scope.
Jan 31 05:12:33 np0005603787 podman[223495]: 2026-01-31 10:12:33.242876839 +0000 UTC m=+0.024978369 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:12:33 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:12:33 np0005603787 podman[223495]: 2026-01-31 10:12:33.355009635 +0000 UTC m=+0.137111255 container init 57887f25336f6378fe6549828eab862d3c193c837d059d95488efc8d405904f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_napier, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 05:12:33 np0005603787 python3.9[223482]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 05:12:33 np0005603787 podman[223495]: 2026-01-31 10:12:33.363714985 +0000 UTC m=+0.145816515 container start 57887f25336f6378fe6549828eab862d3c193c837d059d95488efc8d405904f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_napier, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 05:12:33 np0005603787 podman[223495]: 2026-01-31 10:12:33.36755893 +0000 UTC m=+0.149660550 container attach 57887f25336f6378fe6549828eab862d3c193c837d059d95488efc8d405904f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 05:12:33 np0005603787 determined_napier[223510]: 167 167
Jan 31 05:12:33 np0005603787 podman[223495]: 2026-01-31 10:12:33.369017 +0000 UTC m=+0.151118540 container died 57887f25336f6378fe6549828eab862d3c193c837d059d95488efc8d405904f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_napier, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 05:12:33 np0005603787 systemd[1]: libpod-57887f25336f6378fe6549828eab862d3c193c837d059d95488efc8d405904f3.scope: Deactivated successfully.
Jan 31 05:12:33 np0005603787 iscsid[216567]: iscsid shutting down.
Jan 31 05:12:33 np0005603787 systemd[1]: Stopping Open-iSCSI...
Jan 31 05:12:33 np0005603787 systemd[1]: iscsid.service: Deactivated successfully.
Jan 31 05:12:33 np0005603787 systemd[1]: Stopped Open-iSCSI.
Jan 31 05:12:33 np0005603787 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 05:12:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:33 np0005603787 podman[223495]: 2026-01-31 10:12:33.412187488 +0000 UTC m=+0.194289018 container remove 57887f25336f6378fe6549828eab862d3c193c837d059d95488efc8d405904f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_napier, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:12:33 np0005603787 systemd[1]: Starting Open-iSCSI...
Jan 31 05:12:33 np0005603787 systemd[1]: var-lib-containers-storage-overlay-92a13f909c59169849c757ee5a841e8d0490bec95677710b94feafae77e3f684-merged.mount: Deactivated successfully.
Jan 31 05:12:33 np0005603787 systemd[1]: Started Open-iSCSI.
Jan 31 05:12:33 np0005603787 systemd[1]: libpod-conmon-57887f25336f6378fe6549828eab862d3c193c837d059d95488efc8d405904f3.scope: Deactivated successfully.
Jan 31 05:12:33 np0005603787 podman[223550]: 2026-01-31 10:12:33.574872085 +0000 UTC m=+0.051639962 container create 5cf35c4a330f73a03ec5090333763a47875cdd79aa9dccaf201db4466c2106d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mirzakhani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:12:33 np0005603787 systemd[1]: Started libpod-conmon-5cf35c4a330f73a03ec5090333763a47875cdd79aa9dccaf201db4466c2106d1.scope.
Jan 31 05:12:33 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:12:33 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52d3fe674ed2b5df0150f6b0ba139187ee4c769d5738e504d0c10dddb21c8802/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:12:33 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52d3fe674ed2b5df0150f6b0ba139187ee4c769d5738e504d0c10dddb21c8802/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:12:33 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52d3fe674ed2b5df0150f6b0ba139187ee4c769d5738e504d0c10dddb21c8802/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:12:33 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52d3fe674ed2b5df0150f6b0ba139187ee4c769d5738e504d0c10dddb21c8802/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:12:33 np0005603787 podman[223550]: 2026-01-31 10:12:33.557130578 +0000 UTC m=+0.033898475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:12:33 np0005603787 podman[223550]: 2026-01-31 10:12:33.658725193 +0000 UTC m=+0.135493090 container init 5cf35c4a330f73a03ec5090333763a47875cdd79aa9dccaf201db4466c2106d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mirzakhani, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 05:12:33 np0005603787 podman[223550]: 2026-01-31 10:12:33.666637441 +0000 UTC m=+0.143405308 container start 5cf35c4a330f73a03ec5090333763a47875cdd79aa9dccaf201db4466c2106d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:12:33 np0005603787 podman[223550]: 2026-01-31 10:12:33.669457429 +0000 UTC m=+0.146225296 container attach 5cf35c4a330f73a03ec5090333763a47875cdd79aa9dccaf201db4466c2106d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 05:12:34 np0005603787 python3.9[223720]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 05:12:34 np0005603787 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 31 05:12:34 np0005603787 multipathd[220537]: exit (signal)
Jan 31 05:12:34 np0005603787 multipathd[220537]: --------shut down-------
Jan 31 05:12:34 np0005603787 systemd[1]: multipathd.service: Deactivated successfully.
Jan 31 05:12:34 np0005603787 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 31 05:12:34 np0005603787 lvm[223784]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:12:34 np0005603787 lvm[223784]: VG ceph_vg0 finished
Jan 31 05:12:34 np0005603787 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 05:12:34 np0005603787 multipathd[223787]: --------start up--------
Jan 31 05:12:34 np0005603787 multipathd[223787]: read /etc/multipath.conf
Jan 31 05:12:34 np0005603787 multipathd[223787]: path checkers start up
Jan 31 05:12:34 np0005603787 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 05:12:34 np0005603787 lvm[223791]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:12:34 np0005603787 lvm[223791]: VG ceph_vg1 finished
Jan 31 05:12:34 np0005603787 lvm[223798]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:12:34 np0005603787 lvm[223798]: VG ceph_vg2 finished
Jan 31 05:12:34 np0005603787 exciting_mirzakhani[223601]: {}
Jan 31 05:12:34 np0005603787 systemd[1]: libpod-5cf35c4a330f73a03ec5090333763a47875cdd79aa9dccaf201db4466c2106d1.scope: Deactivated successfully.
Jan 31 05:12:34 np0005603787 podman[223550]: 2026-01-31 10:12:34.387369547 +0000 UTC m=+0.864137434 container died 5cf35c4a330f73a03ec5090333763a47875cdd79aa9dccaf201db4466c2106d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 05:12:34 np0005603787 systemd[1]: var-lib-containers-storage-overlay-52d3fe674ed2b5df0150f6b0ba139187ee4c769d5738e504d0c10dddb21c8802-merged.mount: Deactivated successfully.
Jan 31 05:12:34 np0005603787 podman[223550]: 2026-01-31 10:12:34.47577159 +0000 UTC m=+0.952539457 container remove 5cf35c4a330f73a03ec5090333763a47875cdd79aa9dccaf201db4466c2106d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mirzakhani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:12:34 np0005603787 systemd[1]: libpod-conmon-5cf35c4a330f73a03ec5090333763a47875cdd79aa9dccaf201db4466c2106d1.scope: Deactivated successfully.
Jan 31 05:12:34 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:12:34 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:12:34 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:12:34 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:12:34 np0005603787 python3.9[223987]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:12:34 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:12:34 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
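[Annotation] The two mon_command entries above show cephadm persisting the refreshed host inventory under config-key. If needed, the stored blob can be read back on a mon/mgr host with "ceph config-key get mgr/cephadm/host.compute-0.devices.0" (assuming admin keyring access; the key name is taken from the log line itself).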
Jan 31 05:12:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:35 np0005603787 python3.9[224143]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:36 np0005603787 python3.9[224295]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 05:12:36 np0005603787 systemd[1]: Reloading.
Jan 31 05:12:36 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:12:36 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:12:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:12:37.050 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:12:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:12:37.051 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:12:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:12:37.051 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:12:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:12:37 np0005603787 python3.9[224480]: ansible-ansible.builtin.service_facts Invoked
Jan 31 05:12:37 np0005603787 network[224497]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 05:12:37 np0005603787 network[224498]: 'network-scripts' will be removed from distribution in near future.
Jan 31 05:12:37 np0005603787 network[224499]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 05:12:38 np0005603787 podman[224541]: 2026-01-31 10:12:38.930858738 +0000 UTC m=+0.045044342 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 05:12:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:40 np0005603787 podman[224763]: 2026-01-31 10:12:40.66819869 +0000 UTC m=+0.072049533 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 05:12:40 np0005603787 python3.9[224812]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:12:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:41 np0005603787 python3.9[224970]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:12:42 np0005603787 python3.9[225123]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:12:42 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:12:43
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['vms', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', '.rgw.root']
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:12:43 np0005603787 python3.9[225276]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:12:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:12:43 np0005603787 python3.9[225429]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:12:44 np0005603787 python3.9[225582]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:12:45 np0005603787 python3.9[225735]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:12:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:46 np0005603787 python3.9[225888]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:12:46 np0005603787 python3.9[226041]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:47 np0005603787 python3.9[226193]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:12:47 np0005603787 python3.9[226345]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:48 np0005603787 python3.9[226497]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:49 np0005603787 python3.9[226649]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:49 np0005603787 python3.9[226801]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:50 np0005603787 python3.9[226953]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:50 np0005603787 python3.9[227105]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:51 np0005603787 python3.9[227257]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:51 np0005603787 python3.9[227409]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:12:52 np0005603787 python3.9[227561]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:53 np0005603787 python3.9[227713]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:53 np0005603787 python3.9[227865]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1786947556520692e-06 of space, bias 4.0, pg target 0.0014144337067824831 quantized to 16 (current 16)
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:12:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
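[Annotation] The pg_autoscaler targets logged above are consistent with capacity_ratio x bias x 300 before quantization, where the factor 300 would be the cluster's 3 OSDs times the default mon_target_pg_per_osd of 100 (an assumption, not stated in the log). A quick check against the '.mgr' line:

    # Quick check (assumptions: 3 OSDs, mon_target_pg_per_osd=100 -> factor 300)
    ratio, bias = 7.185749983720779e-06, 1.0
    print(ratio * bias * 300)  # 0.0021557249951162337, the '.mgr' pg target logged above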
Jan 31 05:12:54 np0005603787 python3.9[228017]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:54 np0005603787 python3.9[228169]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:55 np0005603787 python3.9[228321]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:12:56 np0005603787 python3.9[228473]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
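[Annotation] With the #012 escapes rendered as newlines, the shell fragment Ansible ran in the line above is:

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi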
Jan 31 05:12:56 np0005603787 python3.9[228625]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 05:12:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:12:57 np0005603787 python3.9[228777]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 05:12:57 np0005603787 systemd[1]: Reloading.
Jan 31 05:12:57 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:12:57 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:12:58 np0005603787 python3.9[228964]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:12:59 np0005603787 python3.9[229117]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:12:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:12:59 np0005603787 python3.9[229270]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:13:00 np0005603787 python3.9[229423]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:13:00 np0005603787 python3.9[229576]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:13:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:01 np0005603787 python3.9[229729]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:13:01 np0005603787 python3.9[229882]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:13:02 np0005603787 python3.9[230035]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
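[Annotation] The sequence above walks each tripleo_nova_* unit through the same pattern: stop and disable it, delete its unit file from /usr/lib/systemd/system and /etc/systemd/system, daemon-reload, then reset-failed. A minimal stand-alone sketch of that pattern, assuming root on the host (this is not the Ansible implementation, just the equivalent steps):

    #!/usr/bin/env python3
    # Minimal sketch (assumption): replicate the tripleo_nova_* cleanup steps
    # visible in the log -- disable/stop, remove unit files, reload, reset-failed.
    import pathlib
    import subprocess

    UNITS = [
        "tripleo_nova_compute", "tripleo_nova_migration_target",
        "tripleo_nova_api_cron", "tripleo_nova_api", "tripleo_nova_conductor",
        "tripleo_nova_metadata", "tripleo_nova_scheduler", "tripleo_nova_vnc_proxy",
    ]

    def cleanup():
        for unit in UNITS:
            # Unit may already be gone; ignore failures here.
            subprocess.run(["systemctl", "disable", "--now", f"{unit}.service"], check=False)
            for base in ("/usr/lib/systemd/system", "/etc/systemd/system"):
                pathlib.Path(base, f"{unit}.service").unlink(missing_ok=True)
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        for unit in UNITS:
            subprocess.run(["systemctl", "reset-failed", f"{unit}.service"], check=False)

    if __name__ == "__main__":
        cleanup()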
Jan 31 05:13:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:13:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:03 np0005603787 python3.9[230188]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:13:04 np0005603787 python3.9[230340]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:13:05 np0005603787 python3.9[230492]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:13:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:05 np0005603787 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 31 05:13:05 np0005603787 python3.9[230644]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:13:06 np0005603787 python3.9[230797]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:13:06 np0005603787 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 05:13:06 np0005603787 python3.9[230949]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:13:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:13:07 np0005603787 python3.9[231102]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:13:08 np0005603787 python3.9[231254]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:13:08 np0005603787 python3.9[231406]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:13:09 np0005603787 podman[231530]: 2026-01-31 10:13:09.09185374 +0000 UTC m=+0.048118065 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 05:13:09 np0005603787 python3.9[231577]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:13:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:10 np0005603787 podman[231602]: 2026-01-31 10:13:10.853614757 +0000 UTC m=+0.079093019 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 31 05:13:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:13:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:13:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:13:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:13:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:13:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:13:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:13:14 np0005603787 python3.9[231756]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 31 05:13:15 np0005603787 python3.9[231909]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 05:13:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:15 np0005603787 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 31 05:13:15 np0005603787 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 31 05:13:16 np0005603787 python3.9[232069]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 05:13:17 np0005603787 systemd-logind[786]: New session 51 of user zuul.
Jan 31 05:13:17 np0005603787 systemd[1]: Started Session 51 of User zuul.
Jan 31 05:13:17 np0005603787 systemd-logind[786]: Session 51 logged out. Waiting for processes to exit.
Jan 31 05:13:17 np0005603787 systemd[1]: session-51.scope: Deactivated successfully.
Jan 31 05:13:17 np0005603787 systemd-logind[786]: Removed session 51.
Jan 31 05:13:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:13:17 np0005603787 python3.9[232256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:13:18 np0005603787 python3.9[232377]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769854397.4447842-986-188725454698725/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:13:18 np0005603787 python3.9[232527]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:13:19 np0005603787 python3.9[232603]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:13:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:19 np0005603787 python3.9[232753]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:13:20 np0005603787 python3.9[232874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769854399.4682527-986-61579420314989/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:13:20 np0005603787 python3.9[233024]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:13:21 np0005603787 python3.9[233145]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769854400.5035753-986-209148020385546/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:13:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:21 np0005603787 python3.9[233295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:13:22 np0005603787 python3.9[233416]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769854401.527966-986-215081365297487/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:13:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:13:22 np0005603787 python3.9[233566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:13:23 np0005603787 python3.9[233687]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769854402.5642574-986-202057139232215/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:13:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:24 np0005603787 python3.9[233839]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:13:24 np0005603787 python3.9[233991]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:13:25 np0005603787 python3.9[234143]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:13:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:25 np0005603787 python3.9[234295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:13:26 np0005603787 python3.9[234418]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769854405.3640501-1093-172956028949074/.source _original_basename=.cu7_8alv follow=False checksum=471a06ec20ac1e86f864adcc1a77e6cea180bd90 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 31 05:13:26 np0005603787 python3.9[234570]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:13:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:27 np0005603787 python3.9[234722]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:13:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:13:28 np0005603787 python3.9[234843]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769854407.1152546-1119-57769829863463/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:13:28 np0005603787 python3.9[234993]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:13:29 np0005603787 python3.9[235114]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769854408.1673858-1134-130465087614171/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:13:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:30 np0005603787 python3.9[235266]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 31 05:13:30 np0005603787 python3.9[235418]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 05:13:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:31 np0005603787 python3[235570]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 05:13:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:13:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:13:37.051 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:13:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:13:37.052 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:13:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:13:37.053 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:13:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:13:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:41 np0005603787 podman[235708]: 2026-01-31 10:13:41.438836259 +0000 UTC m=+1.654237024 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 05:13:41 np0005603787 podman[235583]: 2026-01-31 10:13:41.654215848 +0000 UTC m=+9.652974367 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 05:13:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:13:41 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:13:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:13:41 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:13:41 np0005603787 podman[235729]: 2026-01-31 10:13:41.674831431 +0000 UTC m=+0.208391630 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 05:13:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:13:41 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:13:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:13:41 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:13:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:13:41 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:13:41 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:13:41 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:13:41 np0005603787 podman[235819]: 2026-01-31 10:13:41.827564841 +0000 UTC m=+0.074097714 container create 7cab5131c71435321ba596ccc071cf8e35f3f9f8f7acbd9aa1972618f406aa3f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 05:13:41 np0005603787 podman[235819]: 2026-01-31 10:13:41.776633121 +0000 UTC m=+0.023166014 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 05:13:41 np0005603787 python3[235570]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 31 05:13:42 np0005603787 podman[235894]: 2026-01-31 10:13:42.025889446 +0000 UTC m=+0.039008496 container create b0282a08d4561745280287c9e5ac9ef014119b064be313853e85ad0e5cdc45ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bohr, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:13:42 np0005603787 systemd[1]: Started libpod-conmon-b0282a08d4561745280287c9e5ac9ef014119b064be313853e85ad0e5cdc45ab.scope.
Jan 31 05:13:42 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:13:42 np0005603787 podman[235894]: 2026-01-31 10:13:42.007576616 +0000 UTC m=+0.020695656 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:13:42 np0005603787 podman[235894]: 2026-01-31 10:13:42.114771562 +0000 UTC m=+0.127890612 container init b0282a08d4561745280287c9e5ac9ef014119b064be313853e85ad0e5cdc45ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:13:42 np0005603787 podman[235894]: 2026-01-31 10:13:42.121790464 +0000 UTC m=+0.134909484 container start b0282a08d4561745280287c9e5ac9ef014119b064be313853e85ad0e5cdc45ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bohr, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 05:13:42 np0005603787 podman[235894]: 2026-01-31 10:13:42.125040663 +0000 UTC m=+0.138159713 container attach b0282a08d4561745280287c9e5ac9ef014119b064be313853e85ad0e5cdc45ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 05:13:42 np0005603787 frosty_bohr[235935]: 167 167
Jan 31 05:13:42 np0005603787 systemd[1]: libpod-b0282a08d4561745280287c9e5ac9ef014119b064be313853e85ad0e5cdc45ab.scope: Deactivated successfully.
Jan 31 05:13:42 np0005603787 podman[235894]: 2026-01-31 10:13:42.126425681 +0000 UTC m=+0.139544711 container died b0282a08d4561745280287c9e5ac9ef014119b064be313853e85ad0e5cdc45ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bohr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 05:13:42 np0005603787 systemd[1]: var-lib-containers-storage-overlay-d884f51992ef84e5e69538f1acd6ece75f46d9d840abba886ce943e4b5b568a9-merged.mount: Deactivated successfully.
Jan 31 05:13:42 np0005603787 podman[235894]: 2026-01-31 10:13:42.169109876 +0000 UTC m=+0.182228896 container remove b0282a08d4561745280287c9e5ac9ef014119b064be313853e85ad0e5cdc45ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bohr, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 05:13:42 np0005603787 systemd[1]: libpod-conmon-b0282a08d4561745280287c9e5ac9ef014119b064be313853e85ad0e5cdc45ab.scope: Deactivated successfully.
Jan 31 05:13:42 np0005603787 podman[236032]: 2026-01-31 10:13:42.302609411 +0000 UTC m=+0.042351258 container create bb35c6026631ad6513d63249592193ffb3179debb361a5694aac7bee7b138b2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 05:13:42 np0005603787 systemd[1]: Started libpod-conmon-bb35c6026631ad6513d63249592193ffb3179debb361a5694aac7bee7b138b2e.scope.
Jan 31 05:13:42 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:13:42 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646a20de2885d85be8a9bc77c41e205f6bf86f39a8fde46369fc39d364e8d594/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:42 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646a20de2885d85be8a9bc77c41e205f6bf86f39a8fde46369fc39d364e8d594/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:42 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646a20de2885d85be8a9bc77c41e205f6bf86f39a8fde46369fc39d364e8d594/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:42 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646a20de2885d85be8a9bc77c41e205f6bf86f39a8fde46369fc39d364e8d594/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:42 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646a20de2885d85be8a9bc77c41e205f6bf86f39a8fde46369fc39d364e8d594/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:42 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:13:42 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:13:42 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:13:42 np0005603787 podman[236032]: 2026-01-31 10:13:42.370521415 +0000 UTC m=+0.110263272 container init bb35c6026631ad6513d63249592193ffb3179debb361a5694aac7bee7b138b2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:13:42 np0005603787 podman[236032]: 2026-01-31 10:13:42.283263543 +0000 UTC m=+0.023005440 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:13:42 np0005603787 podman[236032]: 2026-01-31 10:13:42.380951329 +0000 UTC m=+0.120693176 container start bb35c6026631ad6513d63249592193ffb3179debb361a5694aac7bee7b138b2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:13:42 np0005603787 podman[236032]: 2026-01-31 10:13:42.384683361 +0000 UTC m=+0.124425208 container attach bb35c6026631ad6513d63249592193ffb3179debb361a5694aac7bee7b138b2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 05:13:42 np0005603787 python3.9[236108]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:13:42 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:13:42 np0005603787 suspicious_mcclintock[236075]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:13:42 np0005603787 suspicious_mcclintock[236075]: --> All data devices are unavailable
Jan 31 05:13:42 np0005603787 systemd[1]: libpod-bb35c6026631ad6513d63249592193ffb3179debb361a5694aac7bee7b138b2e.scope: Deactivated successfully.
Jan 31 05:13:42 np0005603787 podman[236032]: 2026-01-31 10:13:42.752893124 +0000 UTC m=+0.492635011 container died bb35c6026631ad6513d63249592193ffb3179debb361a5694aac7bee7b138b2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mcclintock, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 05:13:42 np0005603787 systemd[1]: var-lib-containers-storage-overlay-646a20de2885d85be8a9bc77c41e205f6bf86f39a8fde46369fc39d364e8d594-merged.mount: Deactivated successfully.
Jan 31 05:13:42 np0005603787 podman[236032]: 2026-01-31 10:13:42.818951567 +0000 UTC m=+0.558693434 container remove bb35c6026631ad6513d63249592193ffb3179debb361a5694aac7bee7b138b2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mcclintock, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 05:13:42 np0005603787 systemd[1]: libpod-conmon-bb35c6026631ad6513d63249592193ffb3179debb361a5694aac7bee7b138b2e.scope: Deactivated successfully.
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:13:43
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['images', 'vms', '.mgr', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'backups']
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:13:43 np0005603787 podman[236349]: 2026-01-31 10:13:43.224402987 +0000 UTC m=+0.021619882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:43 np0005603787 python3.9[236358]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 31 05:13:43 np0005603787 podman[236349]: 2026-01-31 10:13:43.70363764 +0000 UTC m=+0.500854505 container create a9eec13f5d905b35426cfc006d3671d832eab3408a04b7144911035de4489710 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 05:13:43 np0005603787 systemd[1]: Started libpod-conmon-a9eec13f5d905b35426cfc006d3671d832eab3408a04b7144911035de4489710.scope.
Jan 31 05:13:43 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:13:43 np0005603787 podman[236349]: 2026-01-31 10:13:43.804030351 +0000 UTC m=+0.601247216 container init a9eec13f5d905b35426cfc006d3671d832eab3408a04b7144911035de4489710 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:13:43 np0005603787 podman[236349]: 2026-01-31 10:13:43.814112056 +0000 UTC m=+0.611328921 container start a9eec13f5d905b35426cfc006d3671d832eab3408a04b7144911035de4489710 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:13:43 np0005603787 great_mendel[236415]: 167 167
Jan 31 05:13:43 np0005603787 systemd[1]: libpod-a9eec13f5d905b35426cfc006d3671d832eab3408a04b7144911035de4489710.scope: Deactivated successfully.
Jan 31 05:13:43 np0005603787 podman[236349]: 2026-01-31 10:13:43.819116673 +0000 UTC m=+0.616333568 container attach a9eec13f5d905b35426cfc006d3671d832eab3408a04b7144911035de4489710 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 05:13:43 np0005603787 podman[236349]: 2026-01-31 10:13:43.819392391 +0000 UTC m=+0.616609256 container died a9eec13f5d905b35426cfc006d3671d832eab3408a04b7144911035de4489710 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mendel, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:13:43 np0005603787 systemd[1]: var-lib-containers-storage-overlay-7e3fec210233f6d2a20f936c2ba7e41170197fc7826b7d75e8af81d00049f8b1-merged.mount: Deactivated successfully.
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:13:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:13:43 np0005603787 podman[236349]: 2026-01-31 10:13:43.95818359 +0000 UTC m=+0.755400455 container remove a9eec13f5d905b35426cfc006d3671d832eab3408a04b7144911035de4489710 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mendel, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 05:13:43 np0005603787 systemd[1]: libpod-conmon-a9eec13f5d905b35426cfc006d3671d832eab3408a04b7144911035de4489710.scope: Deactivated successfully.
Jan 31 05:13:44 np0005603787 podman[236546]: 2026-01-31 10:13:44.072214182 +0000 UTC m=+0.036867527 container create e7a8573e6684bcb8f7b1921b5794ff546d9755b9d10760c8b6ed830e104745d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:13:44 np0005603787 systemd[1]: Started libpod-conmon-e7a8573e6684bcb8f7b1921b5794ff546d9755b9d10760c8b6ed830e104745d8.scope.
Jan 31 05:13:44 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:13:44 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452838c92cf18dfc68e4a20405daaea7ca2c12d70f567102e4fcd08846953373/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:44 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452838c92cf18dfc68e4a20405daaea7ca2c12d70f567102e4fcd08846953373/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:44 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452838c92cf18dfc68e4a20405daaea7ca2c12d70f567102e4fcd08846953373/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:44 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452838c92cf18dfc68e4a20405daaea7ca2c12d70f567102e4fcd08846953373/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:44 np0005603787 podman[236546]: 2026-01-31 10:13:44.055321791 +0000 UTC m=+0.019975156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:13:44 np0005603787 podman[236546]: 2026-01-31 10:13:44.152463864 +0000 UTC m=+0.117117269 container init e7a8573e6684bcb8f7b1921b5794ff546d9755b9d10760c8b6ed830e104745d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 05:13:44 np0005603787 podman[236546]: 2026-01-31 10:13:44.159842085 +0000 UTC m=+0.124495440 container start e7a8573e6684bcb8f7b1921b5794ff546d9755b9d10760c8b6ed830e104745d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_chatelet, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 05:13:44 np0005603787 podman[236546]: 2026-01-31 10:13:44.169452418 +0000 UTC m=+0.134105813 container attach e7a8573e6684bcb8f7b1921b5794ff546d9755b9d10760c8b6ed830e104745d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_chatelet, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 05:13:44 np0005603787 python3.9[236540]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]: {
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:    "0": [
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:        {
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "devices": [
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "/dev/loop3"
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            ],
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "lv_name": "ceph_lv0",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "lv_size": "21470642176",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "name": "ceph_lv0",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "tags": {
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.cluster_name": "ceph",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.crush_device_class": "",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.encrypted": "0",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.objectstore": "bluestore",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.osd_id": "0",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.type": "block",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.vdo": "0",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.with_tpm": "0"
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            },
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "type": "block",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "vg_name": "ceph_vg0"
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:        }
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:    ],
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:    "1": [
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:        {
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "devices": [
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "/dev/loop4"
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            ],
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "lv_name": "ceph_lv1",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "lv_size": "21470642176",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "name": "ceph_lv1",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "tags": {
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.cluster_name": "ceph",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.crush_device_class": "",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.encrypted": "0",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.objectstore": "bluestore",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.osd_id": "1",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.type": "block",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.vdo": "0",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.with_tpm": "0"
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            },
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "type": "block",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "vg_name": "ceph_vg1"
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:        }
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:    ],
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:    "2": [
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:        {
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "devices": [
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "/dev/loop5"
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            ],
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "lv_name": "ceph_lv2",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "lv_size": "21470642176",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "name": "ceph_lv2",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "tags": {
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.cluster_name": "ceph",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.crush_device_class": "",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.encrypted": "0",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.objectstore": "bluestore",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.osd_id": "2",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.type": "block",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.vdo": "0",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:                "ceph.with_tpm": "0"
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            },
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "type": "block",
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:            "vg_name": "ceph_vg2"
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:        }
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]:    ]
Jan 31 05:13:44 np0005603787 dazzling_chatelet[236563]: }
Jan 31 05:13:44 np0005603787 systemd[1]: libpod-e7a8573e6684bcb8f7b1921b5794ff546d9755b9d10760c8b6ed830e104745d8.scope: Deactivated successfully.
Jan 31 05:13:44 np0005603787 podman[236546]: 2026-01-31 10:13:44.463141586 +0000 UTC m=+0.427794971 container died e7a8573e6684bcb8f7b1921b5794ff546d9755b9d10760c8b6ed830e104745d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_chatelet, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:13:44 np0005603787 systemd[1]: var-lib-containers-storage-overlay-452838c92cf18dfc68e4a20405daaea7ca2c12d70f567102e4fcd08846953373-merged.mount: Deactivated successfully.
Jan 31 05:13:44 np0005603787 podman[236546]: 2026-01-31 10:13:44.507756143 +0000 UTC m=+0.472409498 container remove e7a8573e6684bcb8f7b1921b5794ff546d9755b9d10760c8b6ed830e104745d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_chatelet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 05:13:44 np0005603787 systemd[1]: libpod-conmon-e7a8573e6684bcb8f7b1921b5794ff546d9755b9d10760c8b6ed830e104745d8.scope: Deactivated successfully.
Jan 31 05:13:44 np0005603787 podman[236798]: 2026-01-31 10:13:44.903161989 +0000 UTC m=+0.042003738 container create ea33306f936ee9d0eb9988342b9c2fd0aea0c46a1a00e37ec38683e0ea669f33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_clarke, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:13:44 np0005603787 systemd[1]: Started libpod-conmon-ea33306f936ee9d0eb9988342b9c2fd0aea0c46a1a00e37ec38683e0ea669f33.scope.
Jan 31 05:13:44 np0005603787 podman[236798]: 2026-01-31 10:13:44.881899498 +0000 UTC m=+0.020741247 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:13:44 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:13:44 np0005603787 python3[236785]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 05:13:45 np0005603787 podman[236798]: 2026-01-31 10:13:45.001929625 +0000 UTC m=+0.140771394 container init ea33306f936ee9d0eb9988342b9c2fd0aea0c46a1a00e37ec38683e0ea669f33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_clarke, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 05:13:45 np0005603787 podman[236798]: 2026-01-31 10:13:45.006416828 +0000 UTC m=+0.145258567 container start ea33306f936ee9d0eb9988342b9c2fd0aea0c46a1a00e37ec38683e0ea669f33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_clarke, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:13:45 np0005603787 podman[236798]: 2026-01-31 10:13:45.009873422 +0000 UTC m=+0.148715191 container attach ea33306f936ee9d0eb9988342b9c2fd0aea0c46a1a00e37ec38683e0ea669f33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_clarke, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:13:45 np0005603787 silly_clarke[236814]: 167 167
Jan 31 05:13:45 np0005603787 systemd[1]: libpod-ea33306f936ee9d0eb9988342b9c2fd0aea0c46a1a00e37ec38683e0ea669f33.scope: Deactivated successfully.
Jan 31 05:13:45 np0005603787 podman[236798]: 2026-01-31 10:13:45.010944401 +0000 UTC m=+0.149786160 container died ea33306f936ee9d0eb9988342b9c2fd0aea0c46a1a00e37ec38683e0ea669f33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Jan 31 05:13:45 np0005603787 systemd[1]: var-lib-containers-storage-overlay-c95b6e2e9f7af4100c03d6c76d3678cbedacde98f1bee94473a2156a30fb1b67-merged.mount: Deactivated successfully.
Jan 31 05:13:45 np0005603787 podman[236798]: 2026-01-31 10:13:45.048348173 +0000 UTC m=+0.187189922 container remove ea33306f936ee9d0eb9988342b9c2fd0aea0c46a1a00e37ec38683e0ea669f33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:13:45 np0005603787 systemd[1]: libpod-conmon-ea33306f936ee9d0eb9988342b9c2fd0aea0c46a1a00e37ec38683e0ea669f33.scope: Deactivated successfully.
Jan 31 05:13:45 np0005603787 podman[236866]: 2026-01-31 10:13:45.108252047 +0000 UTC m=+0.019363859 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 05:13:45 np0005603787 podman[236866]: 2026-01-31 10:13:45.255449716 +0000 UTC m=+0.166561478 container create 5bc5f5138f5fe93b4ed7259007919a70bc3a11777bb50da7e9fa9480c5124a93 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 05:13:45 np0005603787 python3[236785]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 31 05:13:45 np0005603787 podman[236884]: 2026-01-31 10:13:45.267518616 +0000 UTC m=+0.137258548 container create 0fa2dc38dc46b7b59e946577f2e90824e357314233541b7920359a5c72fd7bb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_fermi, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:13:45 np0005603787 podman[236884]: 2026-01-31 10:13:45.215851145 +0000 UTC m=+0.085591037 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:13:45 np0005603787 systemd[1]: Started libpod-conmon-0fa2dc38dc46b7b59e946577f2e90824e357314233541b7920359a5c72fd7bb4.scope.
Jan 31 05:13:45 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:13:45 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618485e032fd799ade44e87199423fa1463b39a8428d8c9d252268eedd40e2c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:45 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618485e032fd799ade44e87199423fa1463b39a8428d8c9d252268eedd40e2c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:45 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618485e032fd799ade44e87199423fa1463b39a8428d8c9d252268eedd40e2c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:45 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618485e032fd799ade44e87199423fa1463b39a8428d8c9d252268eedd40e2c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:45 np0005603787 podman[236884]: 2026-01-31 10:13:45.362398876 +0000 UTC m=+0.232138828 container init 0fa2dc38dc46b7b59e946577f2e90824e357314233541b7920359a5c72fd7bb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:13:45 np0005603787 podman[236884]: 2026-01-31 10:13:45.374717453 +0000 UTC m=+0.244457365 container start 0fa2dc38dc46b7b59e946577f2e90824e357314233541b7920359a5c72fd7bb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_fermi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 05:13:45 np0005603787 podman[236884]: 2026-01-31 10:13:45.378928108 +0000 UTC m=+0.248668070 container attach 0fa2dc38dc46b7b59e946577f2e90824e357314233541b7920359a5c72fd7bb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_fermi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:13:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:45 np0005603787 lvm[237159]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:13:45 np0005603787 lvm[237159]: VG ceph_vg2 finished
Jan 31 05:13:45 np0005603787 lvm[237155]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:13:45 np0005603787 lvm[237155]: VG ceph_vg0 finished
Jan 31 05:13:45 np0005603787 lvm[237158]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:13:45 np0005603787 lvm[237158]: VG ceph_vg1 finished
Jan 31 05:13:45 np0005603787 python3.9[237137]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:13:45 np0005603787 beautiful_fermi[236911]: {}
Jan 31 05:13:46 np0005603787 systemd[1]: libpod-0fa2dc38dc46b7b59e946577f2e90824e357314233541b7920359a5c72fd7bb4.scope: Deactivated successfully.
Jan 31 05:13:46 np0005603787 podman[236884]: 2026-01-31 10:13:46.030035994 +0000 UTC m=+0.899775896 container died 0fa2dc38dc46b7b59e946577f2e90824e357314233541b7920359a5c72fd7bb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_fermi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 05:13:46 np0005603787 systemd[1]: var-lib-containers-storage-overlay-618485e032fd799ade44e87199423fa1463b39a8428d8c9d252268eedd40e2c6-merged.mount: Deactivated successfully.
Jan 31 05:13:46 np0005603787 podman[236884]: 2026-01-31 10:13:46.113263015 +0000 UTC m=+0.983002927 container remove 0fa2dc38dc46b7b59e946577f2e90824e357314233541b7920359a5c72fd7bb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_fermi, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:13:46 np0005603787 systemd[1]: libpod-conmon-0fa2dc38dc46b7b59e946577f2e90824e357314233541b7920359a5c72fd7bb4.scope: Deactivated successfully.
Jan 31 05:13:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:13:46 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:13:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:13:46 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:13:46 np0005603787 python3.9[237353]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:13:46 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:13:46 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:13:47 np0005603787 python3.9[237504]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769854426.7123811-1230-158690785581465/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:13:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:13:47 np0005603787 python3.9[237580]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 05:13:47 np0005603787 systemd[1]: Reloading.
Jan 31 05:13:47 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:13:47 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:13:48 np0005603787 python3.9[237693]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:13:48 np0005603787 systemd[1]: Reloading.
Jan 31 05:13:48 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:13:48 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:13:49 np0005603787 systemd[1]: Starting nova_compute container...
Jan 31 05:13:49 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:13:49 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cfcdd535e70a02093a32a55dc7745cdca766dc59478111104c77008988fc7d9/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:49 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cfcdd535e70a02093a32a55dc7745cdca766dc59478111104c77008988fc7d9/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:49 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cfcdd535e70a02093a32a55dc7745cdca766dc59478111104c77008988fc7d9/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:49 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cfcdd535e70a02093a32a55dc7745cdca766dc59478111104c77008988fc7d9/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:49 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cfcdd535e70a02093a32a55dc7745cdca766dc59478111104c77008988fc7d9/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:49 np0005603787 podman[237733]: 2026-01-31 10:13:49.285932493 +0000 UTC m=+0.157207062 container init 5bc5f5138f5fe93b4ed7259007919a70bc3a11777bb50da7e9fa9480c5124a93 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 31 05:13:49 np0005603787 podman[237733]: 2026-01-31 10:13:49.292240406 +0000 UTC m=+0.163514945 container start 5bc5f5138f5fe93b4ed7259007919a70bc3a11777bb50da7e9fa9480c5124a93 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:13:49 np0005603787 nova_compute[237748]: + sudo -E kolla_set_configs
Jan 31 05:13:49 np0005603787 podman[237733]: nova_compute
Jan 31 05:13:49 np0005603787 systemd[1]: Started nova_compute container.
Jan 31 05:13:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Validating config file
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Copying service configuration files
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Deleting /etc/ceph
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Creating directory /etc/ceph
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Writing out command to execute
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 05:13:49 np0005603787 nova_compute[237748]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 05:13:49 np0005603787 nova_compute[237748]: ++ cat /run_command
Jan 31 05:13:49 np0005603787 nova_compute[237748]: + CMD=nova-compute
Jan 31 05:13:49 np0005603787 nova_compute[237748]: + ARGS=
Jan 31 05:13:49 np0005603787 nova_compute[237748]: + sudo kolla_copy_cacerts
Jan 31 05:13:49 np0005603787 nova_compute[237748]: + [[ ! -n '' ]]
Jan 31 05:13:49 np0005603787 nova_compute[237748]: + . kolla_extend_start
Jan 31 05:13:49 np0005603787 nova_compute[237748]: Running command: 'nova-compute'
Jan 31 05:13:49 np0005603787 nova_compute[237748]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 05:13:49 np0005603787 nova_compute[237748]: + umask 0022
Jan 31 05:13:49 np0005603787 nova_compute[237748]: + exec nova-compute
Jan 31 05:13:50 np0005603787 python3.9[237909]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:13:50 np0005603787 python3.9[238059]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:13:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:51 np0005603787 python3.9[238210]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:13:52 np0005603787 python3.9[238362]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 05:13:52 np0005603787 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 05:13:52 np0005603787 nova_compute[237748]: 2026-01-31 10:13:52.584 237752 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 05:13:52 np0005603787 nova_compute[237748]: 2026-01-31 10:13:52.585 237752 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 05:13:52 np0005603787 nova_compute[237748]: 2026-01-31 10:13:52.585 237752 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 05:13:52 np0005603787 nova_compute[237748]: 2026-01-31 10:13:52.585 237752 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 31 05:13:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:13:52 np0005603787 nova_compute[237748]: 2026-01-31 10:13:52.735 237752 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:13:52 np0005603787 nova_compute[237748]: 2026-01-31 10:13:52.749 237752 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:13:52 np0005603787 nova_compute[237748]: 2026-01-31 10:13:52.750 237752 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 31 05:13:53 np0005603787 python3.9[238541]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 05:13:53 np0005603787 systemd[1]: Stopping nova_compute container...
Jan 31 05:13:53 np0005603787 systemd[1]: libpod-5bc5f5138f5fe93b4ed7259007919a70bc3a11777bb50da7e9fa9480c5124a93.scope: Deactivated successfully.
Jan 31 05:13:53 np0005603787 systemd[1]: libpod-5bc5f5138f5fe93b4ed7259007919a70bc3a11777bb50da7e9fa9480c5124a93.scope: Consumed 2.317s CPU time.
Jan 31 05:13:53 np0005603787 podman[238545]: 2026-01-31 10:13:53.298561193 +0000 UTC m=+0.072220253 container died 5bc5f5138f5fe93b4ed7259007919a70bc3a11777bb50da7e9fa9480c5124a93 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 05:13:53 np0005603787 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5bc5f5138f5fe93b4ed7259007919a70bc3a11777bb50da7e9fa9480c5124a93-userdata-shm.mount: Deactivated successfully.
Jan 31 05:13:53 np0005603787 systemd[1]: var-lib-containers-storage-overlay-6cfcdd535e70a02093a32a55dc7745cdca766dc59478111104c77008988fc7d9-merged.mount: Deactivated successfully.
Jan 31 05:13:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1786947556520692e-06 of space, bias 4.0, pg target 0.0014144337067824831 quantized to 16 (current 16)
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:13:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
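Annotation: each pg_autoscaler line above reports, per pool, the fraction of raw capacity used, the pool's bias, and the resulting PG target before and after quantization. The printed numbers are consistent with pg_target ≈ usage_fraction × bias × 300 on this cluster (300 appears to be the overall PG budget derived from this cluster's OSD count and target-PGs-per-OSD; it is inferred from this log, not a universal constant), then rounded to a power of two and floored at the pool's minimum. A rough sketch reproducing the figures logged above:

    # Rough sketch of the per-pool PG-target arithmetic the pg_autoscaler logs above.
    # Assumptions: pg_budget=300 is inferred from this output (e.g. '.mgr':
    # 7.185749983720779e-06 * 1.0 * 300 ≈ 0.002156, as logged), and pg_min is
    # taken from each pool's current/quantized value; neither is a Ceph constant.

    def pg_target(usage_fraction, bias, pg_budget=300, pg_min=1):
        raw = usage_fraction * bias * pg_budget
        # Round up to the next power-of-two multiple of pg_min, never below pg_min
        # (a simplified stand-in for the autoscaler's nearest-power-of-two rule).
        quantized = pg_min
        while quantized < raw:
            quantized *= 2
        return raw, quantized

    for name, used, bias, pg_min in [
        ('.mgr', 7.185749983720779e-06, 1.0, 1),
        ('cephfs.cephfs.meta', 1.1786947556520692e-06, 4.0, 16),
        ('default.rgw.log', 4.1969867161554995e-06, 1.0, 32),
    ]:
        raw, q = pg_target(used, bias, pg_min=pg_min)
        print(f"{name}: raw target {raw:.6f} -> quantized {q}")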
Jan 31 05:13:54 np0005603787 podman[238545]: 2026-01-31 10:13:54.735534624 +0000 UTC m=+1.509193684 container cleanup 5bc5f5138f5fe93b4ed7259007919a70bc3a11777bb50da7e9fa9480c5124a93 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:13:54 np0005603787 podman[238545]: nova_compute
Jan 31 05:13:54 np0005603787 podman[238574]: nova_compute
Jan 31 05:13:54 np0005603787 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 31 05:13:54 np0005603787 systemd[1]: Stopped nova_compute container.
Jan 31 05:13:54 np0005603787 systemd[1]: Starting nova_compute container...
Jan 31 05:13:54 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:13:54 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cfcdd535e70a02093a32a55dc7745cdca766dc59478111104c77008988fc7d9/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:54 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cfcdd535e70a02093a32a55dc7745cdca766dc59478111104c77008988fc7d9/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:54 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cfcdd535e70a02093a32a55dc7745cdca766dc59478111104c77008988fc7d9/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:54 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cfcdd535e70a02093a32a55dc7745cdca766dc59478111104c77008988fc7d9/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:54 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cfcdd535e70a02093a32a55dc7745cdca766dc59478111104c77008988fc7d9/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:54 np0005603787 podman[238587]: 2026-01-31 10:13:54.921534832 +0000 UTC m=+0.105157412 container init 5bc5f5138f5fe93b4ed7259007919a70bc3a11777bb50da7e9fa9480c5124a93 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Jan 31 05:13:54 np0005603787 podman[238587]: 2026-01-31 10:13:54.934772104 +0000 UTC m=+0.118394684 container start 5bc5f5138f5fe93b4ed7259007919a70bc3a11777bb50da7e9fa9480c5124a93 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Jan 31 05:13:54 np0005603787 podman[238587]: nova_compute
Jan 31 05:13:54 np0005603787 nova_compute[238603]: + sudo -E kolla_set_configs
Jan 31 05:13:54 np0005603787 systemd[1]: Started nova_compute container.
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Validating config file
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Copying service configuration files
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Deleting /etc/ceph
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Creating directory /etc/ceph
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Writing out command to execute
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 05:13:55 np0005603787 nova_compute[238603]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 05:13:55 np0005603787 nova_compute[238603]: ++ cat /run_command
Jan 31 05:13:55 np0005603787 nova_compute[238603]: + CMD=nova-compute
Jan 31 05:13:55 np0005603787 nova_compute[238603]: + ARGS=
Jan 31 05:13:55 np0005603787 nova_compute[238603]: + sudo kolla_copy_cacerts
Jan 31 05:13:55 np0005603787 nova_compute[238603]: Running command: 'nova-compute'
Jan 31 05:13:55 np0005603787 nova_compute[238603]: + [[ ! -n '' ]]
Jan 31 05:13:55 np0005603787 nova_compute[238603]: + . kolla_extend_start
Jan 31 05:13:55 np0005603787 nova_compute[238603]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 05:13:55 np0005603787 nova_compute[238603]: + umask 0022
Jan 31 05:13:55 np0005603787 nova_compute[238603]: + exec nova-compute
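Annotation: the block above is the kolla_start entrypoint inside the restarted nova_compute container. With KOLLA_CONFIG_STRATEGY=COPY_ALWAYS, kolla_set_configs reads /var/lib/kolla/config_files/config.json, deletes and re-copies each listed file, resets ownership and permissions, writes the service command to /run_command, and the wrapper then exec's it (nova-compute). A condensed sketch of that copy loop, assuming the usual kolla config.json shape with a "command" string and a "config_files" list of {source, dest, owner, perm} entries; the real kolla_set_configs handles more cases (globs, directory merges, optional files):

    # Condensed sketch of the kolla_set_configs copy pass logged above.
    # Assumption: config.json follows the common kolla layout, e.g.
    #   {"command": "nova-compute",
    #    "config_files": [{"source": "...", "dest": "...", "owner": "nova", "perm": "0600"}]}
    import json
    import os
    import shutil

    CONFIG = '/var/lib/kolla/config_files/config.json'

    def copy_config():
        with open(CONFIG) as f:
            cfg = json.load(f)
        for item in cfg.get('config_files', []):
            src, dest = item['source'], item['dest']
            # COPY_ALWAYS: remove whatever is in place, then copy fresh.
            if os.path.isdir(dest) and not os.path.islink(dest):
                print(f"Deleting {dest}")
                shutil.rmtree(dest)
            elif os.path.lexists(dest):
                print(f"Deleting {dest}")
                os.remove(dest)
            print(f"Copying {src} to {dest}")
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            if os.path.isdir(src):
                shutil.copytree(src, dest)
            else:
                shutil.copy(src, dest)
            print(f"Setting permission for {dest}")
            shutil.chown(dest, user=item.get('owner', 'root'))
            os.chmod(dest, int(item.get('perm', '0644'), 8))
        # kolla_start later reads this file and exec's it ('+ exec nova-compute' above).
        with open('/run_command', 'w') as f:
            f.write(cfg['command'])

    if __name__ == '__main__':
        copy_config()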
Jan 31 05:13:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:55 np0005603787 python3.9[238766]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 05:13:55 np0005603787 systemd[1]: Started libpod-conmon-7cab5131c71435321ba596ccc071cf8e35f3f9f8f7acbd9aa1972618f406aa3f.scope.
Jan 31 05:13:55 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:13:55 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd082e4875908cfc635b2cc107efc988c4acaca3a8aa5ad0bfb2e199958992d8/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:55 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd082e4875908cfc635b2cc107efc988c4acaca3a8aa5ad0bfb2e199958992d8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:55 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd082e4875908cfc635b2cc107efc988c4acaca3a8aa5ad0bfb2e199958992d8/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 31 05:13:55 np0005603787 podman[238792]: 2026-01-31 10:13:55.992868531 +0000 UTC m=+0.128581172 container init 7cab5131c71435321ba596ccc071cf8e35f3f9f8f7acbd9aa1972618f406aa3f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:13:56 np0005603787 podman[238792]: 2026-01-31 10:13:56.00128797 +0000 UTC m=+0.137000561 container start 7cab5131c71435321ba596ccc071cf8e35f3f9f8f7acbd9aa1972618f406aa3f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 05:13:56 np0005603787 python3.9[238766]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 31 05:13:56 np0005603787 nova_compute_init[238814]: INFO:nova_statedir:Applying nova statedir ownership
Jan 31 05:13:56 np0005603787 nova_compute_init[238814]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 31 05:13:56 np0005603787 nova_compute_init[238814]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 31 05:13:56 np0005603787 nova_compute_init[238814]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 31 05:13:56 np0005603787 nova_compute_init[238814]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 31 05:13:56 np0005603787 nova_compute_init[238814]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 31 05:13:56 np0005603787 nova_compute_init[238814]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 31 05:13:56 np0005603787 nova_compute_init[238814]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 31 05:13:56 np0005603787 nova_compute_init[238814]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 31 05:13:56 np0005603787 nova_compute_init[238814]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 31 05:13:56 np0005603787 nova_compute_init[238814]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 31 05:13:56 np0005603787 nova_compute_init[238814]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 31 05:13:56 np0005603787 nova_compute_init[238814]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 31 05:13:56 np0005603787 nova_compute_init[238814]: INFO:nova_statedir:Nova statedir ownership complete
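Annotation: the nova_compute_init lines above come from nova_statedir_ownership.py, which walks /var/lib/nova, re-owns anything not already at the nova uid/gid (42436:42436 here), re-applies the container SELinux context, and skips the path listed in NOVA_STATEDIR_OWNERSHIP_SKIP (/var/lib/nova/compute_id in this container's environment). A simplified sketch of that ownership pass; the real script also handles SELinux contexts via the _nova_secontext mount and other edge cases:

    # Simplified sketch of the /var/lib/nova ownership pass logged above.
    # Assumptions: target uid/gid 42436 (the kolla 'nova' user seen in the log)
    # and a single skip path from NOVA_STATEDIR_OWNERSHIP_SKIP, as in this run.
    import os

    STATEDIR = '/var/lib/nova'
    TARGET_UID = TARGET_GID = 42436
    SKIP = {os.environ.get('NOVA_STATEDIR_OWNERSHIP_SKIP', '/var/lib/nova/compute_id')}

    def fix_ownership():
        for dirpath, dirnames, filenames in os.walk(STATEDIR):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) == (TARGET_UID, TARGET_GID):
                    print(f"Ownership of {path} already {TARGET_UID}:{TARGET_GID}")
                    continue
                print(f"Changing ownership of {path} from {st.st_uid}:{st.st_gid} "
                      f"to {TARGET_UID}:{TARGET_GID}")
                os.lchown(path, TARGET_UID, TARGET_GID)

    if __name__ == '__main__':
        fix_ownership()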
Jan 31 05:13:56 np0005603787 systemd[1]: libpod-7cab5131c71435321ba596ccc071cf8e35f3f9f8f7acbd9aa1972618f406aa3f.scope: Deactivated successfully.
Jan 31 05:13:56 np0005603787 podman[238815]: 2026-01-31 10:13:56.066739327 +0000 UTC m=+0.041191075 container died 7cab5131c71435321ba596ccc071cf8e35f3f9f8f7acbd9aa1972618f406aa3f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Jan 31 05:13:56 np0005603787 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7cab5131c71435321ba596ccc071cf8e35f3f9f8f7acbd9aa1972618f406aa3f-userdata-shm.mount: Deactivated successfully.
Jan 31 05:13:56 np0005603787 systemd[1]: var-lib-containers-storage-overlay-dd082e4875908cfc635b2cc107efc988c4acaca3a8aa5ad0bfb2e199958992d8-merged.mount: Deactivated successfully.
Jan 31 05:13:56 np0005603787 podman[238826]: 2026-01-31 10:13:56.151952424 +0000 UTC m=+0.090263916 container cleanup 7cab5131c71435321ba596ccc071cf8e35f3f9f8f7acbd9aa1972618f406aa3f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Jan 31 05:13:56 np0005603787 systemd[1]: libpod-conmon-7cab5131c71435321ba596ccc071cf8e35f3f9f8f7acbd9aa1972618f406aa3f.scope: Deactivated successfully.
Jan 31 05:13:56 np0005603787 systemd[1]: session-50.scope: Deactivated successfully.
Jan 31 05:13:56 np0005603787 systemd[1]: session-50.scope: Consumed 1min 42.806s CPU time.
Jan 31 05:13:56 np0005603787 systemd-logind[786]: Session 50 logged out. Waiting for processes to exit.
Jan 31 05:13:56 np0005603787 systemd-logind[786]: Removed session 50.
Jan 31 05:13:56 np0005603787 nova_compute[238603]: 2026-01-31 10:13:56.940 238607 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 05:13:56 np0005603787 nova_compute[238603]: 2026-01-31 10:13:56.941 238607 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 05:13:56 np0005603787 nova_compute[238603]: 2026-01-31 10:13:56.941 238607 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 05:13:56 np0005603787 nova_compute[238603]: 2026-01-31 10:13:56.941 238607 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.084 238607 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.093 238607 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.093 238607 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
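Annotation: the "grep -F node.session.scan /sbin/iscsiadm" probe (run here and once at 10:13:52 by the previous container process) appears to be the startup check for manual iSCSI session-scan support: if the literal option name is not found in the binary, grep exits 1 and the feature is treated as unavailable. That outcome is unsurprising here, since kolla_set_configs replaced /usr/sbin/iscsiadm with the run-on-host wrapper a few lines earlier. A minimal reproduction of the logged command, as an illustration rather than the exact nova/os-brick helper (which runs through oslo_concurrency.processutils):

    # Minimal reproduction of the probe logged above: grep the iscsiadm binary
    # for the literal string 'node.session.scan'. Exit status 1 (as in the log)
    # means the string is absent and manual-scan support is treated as missing.
    import subprocess

    def supports_manual_scan(iscsiadm='/sbin/iscsiadm'):
        result = subprocess.run(
            ['grep', '-F', 'node.session.scan', iscsiadm],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0

    if __name__ == '__main__':
        print("manual scan supported:", supports_manual_scan())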
Jan 31 05:13:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.547 238607 INFO nova.virt.driver [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 31 05:13:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.732 238607 INFO nova.compute.provider_config [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.744 238607 DEBUG oslo_concurrency.lockutils [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.745 238607 DEBUG oslo_concurrency.lockutils [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.745 238607 DEBUG oslo_concurrency.lockutils [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.745 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.745 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.746 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.746 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.746 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.746 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.746 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.746 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.746 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.747 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.747 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.747 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.747 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.747 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.747 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.747 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.748 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.748 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.748 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.748 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.748 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.748 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.748 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.749 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.749 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.749 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.749 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.749 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.749 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.750 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.750 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.750 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.750 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.750 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.750 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.750 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.751 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.751 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.751 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.751 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.751 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.751 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.752 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.752 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.752 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.752 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.752 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.752 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.752 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.753 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.753 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.753 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.753 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.753 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.753 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.753 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.754 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.754 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.754 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.754 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.754 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.754 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.754 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.754 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.755 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.755 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.755 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.755 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.755 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.755 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.755 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.756 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.756 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.756 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.756 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.756 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.756 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.756 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.757 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.757 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.757 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.757 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.757 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.757 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.757 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.758 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.758 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.758 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.758 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.758 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.758 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.758 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.759 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.759 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.759 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.759 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.759 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.759 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.759 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.759 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.760 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.760 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.760 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.760 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.760 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.760 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.760 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.761 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.761 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.761 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.761 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.761 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.762 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.762 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.762 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.762 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.762 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.762 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.763 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.763 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.763 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.763 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.763 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.763 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.763 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.763 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.764 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.764 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.764 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.764 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.764 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.764 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.764 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.765 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.765 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.765 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.765 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.765 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.765 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.765 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.766 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.766 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.766 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.766 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.766 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.766 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.766 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.767 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
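The block above is nova-compute echoing every effective option of its top-level [DEFAULT] group at startup; the repeated "log_opt_values .../cfg.py:2602" suffix points at oslo.config's ConfigOpts.log_opt_values() helper, which walks the registered options and emits one DEBUG line per option, masking secret values (hence transport_url = ****). A minimal sketch of how a service produces this kind of dump, assuming stock oslo.config plus the standard logging module rather than Nova's real option definitions:

    # Hedged sketch: reproduce an oslo.config option dump like the lines above.
    # Assumption: plain oslo.config + stdlib logging; option names merely mirror the dump.
    import logging
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.IntOpt('report_interval', default=10),
        cfg.BoolOpt('use_cow_images', default=True),
        cfg.StrOpt('transport_url', secret=True),   # secret options are logged as ****
    ])

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('oslo_service.service')

    CONF([], project='nova')                        # parse CLI/config sources (empty here)
    CONF.log_opt_values(LOG, logging.DEBUG)         # one "name = value" DEBUG line per option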
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.767 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.767 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.767 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.767 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.767 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.767 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.768 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.768 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.768 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.768 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.768 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.768 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.768 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.769 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.769 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.769 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.769 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.769 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.769 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.769 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.770 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.770 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.770 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.770 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.770 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.770 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.770 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.771 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.771 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.771 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.771 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.771 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.771 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.772 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.772 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.772 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.772 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.772 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.772 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.772 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.773 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.773 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.773 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.773 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.773 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.773 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.773 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.774 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.774 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.774 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.774 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.774 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.774 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.774 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.775 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.775 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.775 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.775 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.775 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.775 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.775 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.775 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
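The cache.* entries above are the [cache] group of the effective configuration: with cache.backend set to oslo_cache.dict and cache.memcache_servers left at its localhost default, this host is using the in-process dictionary backend rather than an external memcached. A short sketch of how such grouped options are declared and read with oslo.config (illustrative registrations only; Nova's real definitions live in its own option modules):

    # Hedged sketch: declaring and reading grouped options such as cache.memcache_servers.
    # Assumption: defaults below are placeholders, not oslo.cache's actual defaults.
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_group(cfg.OptGroup('cache'))
    CONF.register_opts([
        cfg.StrOpt('backend', default='oslo_cache.dict'),
        cfg.ListOpt('memcache_servers', default=['localhost:11211']),
    ], group='cache')

    CONF([], project='nova')
    print(CONF.cache.backend, CONF.cache.memcache_servers)
    # prints the effective values, i.e. what log_opt_values() reports in the lines above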
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.776 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.776 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.776 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.776 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.776 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.776 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.776 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.777 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.777 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.777 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.777 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.777 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.777 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.777 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.778 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.778 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.778 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.778 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.778 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.778 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.778 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.779 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.779 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.779 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.779 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.779 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.779 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.779 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.780 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.780 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.780 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.780 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.780 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.780 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.780 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.780 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.781 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.781 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.781 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.781 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.781 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.781 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.781 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.782 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.782 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.782 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.782 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.782 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.782 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.782 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.782 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.783 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.783 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.783 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.783 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.783 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.783 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.783 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.784 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.784 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.784 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.784 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.784 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.784 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.784 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.785 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.785 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.785 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.785 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.785 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.785 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.785 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.785 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.786 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.786 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.786 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.786 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.786 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.786 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.786 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.787 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.787 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.787 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.787 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.787 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.787 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.787 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.787 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.788 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.788 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.788 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.788 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.788 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.788 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.788 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.789 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.789 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.789 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.789 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.789 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.790 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.790 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.790 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.790 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.790 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.791 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.791 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.791 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.791 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.791 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.791 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.791 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.791 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.792 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.792 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.792 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.792 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.792 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.792 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.792 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.793 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.793 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.793 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.793 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.793 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.793 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.793 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.794 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.794 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.794 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.794 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.794 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.794 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.794 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.795 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.795 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.795 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.795 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.795 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.795 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.795 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.795 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.796 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.796 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.796 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.796 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.796 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.797 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.797 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.797 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.797 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.797 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.797 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.797 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.798 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.798 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.798 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.798 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.798 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.798 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.798 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.798 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.799 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.799 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.799 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.799 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.799 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.799 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.799 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.800 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.800 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.800 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.800 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.800 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.800 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.800 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.801 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.801 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.801 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.801 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.801 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.801 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.801 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.802 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.802 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.802 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.802 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.802 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.802 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.802 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.803 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.803 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.803 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.803 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.803 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.803 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.803 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.804 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.804 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.804 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.804 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.804 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.804 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.804 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.805 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.805 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.805 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.805 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.805 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.805 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.805 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.805 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.806 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.806 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.806 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.806 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.806 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.806 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.806 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.807 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.807 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.807 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.807 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.807 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.807 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.807 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.807 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.808 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.808 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.808 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.808 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.808 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.808 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.809 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.809 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.809 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.809 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.809 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.809 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.810 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.810 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.810 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.810 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.810 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.810 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.811 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.811 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.811 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.811 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.811 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.811 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.811 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.812 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.812 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.812 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.812 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.812 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.812 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.812 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.813 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.813 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.813 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.813 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.813 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.813 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.813 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.814 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.814 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.814 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.814 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.814 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.814 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.814 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.815 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.815 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.815 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.815 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.815 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.815 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.815 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.816 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.816 238607 WARNING oslo_config.cfg [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 31 05:13:57 np0005603787 nova_compute[238603]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 31 05:13:57 np0005603787 nova_compute[238603]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 31 05:13:57 np0005603787 nova_compute[238603]: and ``live_migration_inbound_addr`` respectively.
Jan 31 05:13:57 np0005603787 nova_compute[238603]: ).  Its value may be silently ignored in the future.#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.816 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
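[Editor's note] The warning above reports that live_migration_uri is deprecated in favor of live_migration_scheme and live_migration_inbound_addr, yet this host still sets live_migration_uri = qemu+tls://%s/system while both replacement options are logged as None / unset. A minimal sketch of an equivalent nova.conf [libvirt] stanza, assuming the same TLS-based transport is desired; the values are illustrative and are not copied from this deployment's actual configuration file:

    [libvirt]
    # Replaces the deprecated live_migration_uri = qemu+tls://%s/system:
    # with a scheme of "tls", Nova builds qemu+tls:// migration URIs itself.
    live_migration_scheme = tls
    # Optionally pin the address the target compute host listens on for
    # incoming migrations; left commented out here, matching the unset
    # live_migration_inbound_addr shown in the log above.
    # live_migration_inbound_addr = <migration-network address of this host>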
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.816 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.816 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.816 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.817 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.817 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.817 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.817 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.817 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.817 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.817 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.818 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.818 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.818 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.818 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.818 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.818 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.818 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.819 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.rbd_secret_uuid        = 962d77ae-dc67-5de8-89d8-3d1670c67b61 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.819 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
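[Editor's note] The libvirt.images_* and libvirt.rbd_* values logged above (images_type = rbd, images_rbd_pool = vms, images_rbd_ceph_conf = /etc/ceph/ceph.conf, rbd_user = openstack, rbd_secret_uuid = 962d77ae-dc67-5de8-89d8-3d1670c67b61) together describe Ceph RBD-backed instance storage. A minimal sketch of the nova.conf [libvirt] stanza these options correspond to, reconstructed from the log rather than taken from the node's config file; it assumes the libvirt secret referenced by rbd_secret_uuid already exists on the host:

    [libvirt]
    images_type = rbd                           # ephemeral instance disks stored as RBD images
    images_rbd_pool = vms                       # Ceph pool holding the instance disks
    images_rbd_ceph_conf = /etc/ceph/ceph.conf  # cluster configuration read by librados
    rbd_user = openstack                        # cephx user Nova authenticates as
    rbd_secret_uuid = 962d77ae-dc67-5de8-89d8-3d1670c67b61  # libvirt secret holding the cephx key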
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.819 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.819 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.819 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.819 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.819 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.820 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.820 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.820 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.820 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.820 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.820 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.821 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.821 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.821 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.821 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.821 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.821 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.822 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.822 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.822 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.822 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.822 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.822 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.823 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.823 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.823 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.823 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.823 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.823 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.824 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.824 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.824 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.824 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.824 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.824 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.824 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.825 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.825 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.825 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.825 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.825 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.826 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.826 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.826 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.826 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.826 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.826 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.826 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.827 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.827 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.827 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.827 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.827 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.827 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.828 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.828 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.828 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.828 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.828 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.828 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.829 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.829 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.829 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.829 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.829 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.830 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.830 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.830 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.830 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.830 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.831 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.831 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.831 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.831 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.831 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.831 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.831 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.832 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.832 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.832 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.832 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.832 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.832 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.833 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.833 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.833 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.833 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.833 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.833 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.833 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.834 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.834 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.834 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.834 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.834 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.834 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.834 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.835 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.835 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.835 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.835 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.835 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.835 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.835 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.836 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.836 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.836 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.836 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.836 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.836 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.837 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.837 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.837 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.837 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.837 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.838 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.838 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.838 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.838 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.838 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.839 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.839 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.839 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.839 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.839 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.839 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.840 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.840 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.840 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.840 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.840 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.840 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.841 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.841 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.841 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.841 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.841 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.841 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.842 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.842 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.842 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.842 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.842 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.842 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.842 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.843 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.843 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.843 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.843 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.843 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.843 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.843 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.844 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.844 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.844 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.844 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.844 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.844 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.844 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.845 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.845 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.845 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.845 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.845 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.845 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.846 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.846 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.846 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.846 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.846 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.846 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.847 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.847 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.847 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.847 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.847 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.847 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.848 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.848 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.848 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.848 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.848 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.848 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.848 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.849 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.849 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.849 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.849 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.849 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.849 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.849 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.850 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.850 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.850 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.850 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.850 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.851 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.851 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.851 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.851 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.851 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.852 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.852 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.852 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.852 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.852 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.852 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.853 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.853 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.853 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.853 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.853 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.853 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.854 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.854 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.854 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.854 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.854 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.855 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.855 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.855 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.855 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.855 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.856 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.856 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.856 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.856 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.857 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.857 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.857 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.857 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.858 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.858 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.858 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.858 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.858 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.859 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.859 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.859 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.859 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.859 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.860 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.860 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.860 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.860 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.861 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.861 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.861 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.861 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.861 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.861 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.862 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.862 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.862 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.862 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.862 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.863 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.863 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.863 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.863 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.863 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.864 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.864 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.864 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.864 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.864 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.865 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.865 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.865 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.865 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.865 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.866 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.866 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.866 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.866 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.866 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.866 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.867 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.867 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.867 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.867 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.867 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.867 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.868 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.868 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.868 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.868 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.868 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.869 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.869 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.869 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.869 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.869 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.869 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.869 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.870 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.870 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.870 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.870 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.870 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.871 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.871 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.871 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.871 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.871 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.872 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.872 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.872 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.872 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.872 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.872 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.872 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.873 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.873 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.873 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.873 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.873 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.873 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.874 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.874 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.874 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.874 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.874 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.874 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.875 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.875 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.875 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.875 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.875 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.875 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.876 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.876 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.876 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.876 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.876 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.877 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.877 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.877 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.877 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.877 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.877 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.877 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.878 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.878 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.878 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.878 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.878 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.878 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.879 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.879 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.879 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.879 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.879 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.880 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.880 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.880 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.880 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.880 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.880 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.881 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.881 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.881 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.881 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.882 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.882 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.882 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.882 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.882 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.882 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.882 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.883 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.883 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.883 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.883 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.883 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.883 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.884 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.884 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.884 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.884 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.884 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.884 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.885 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.885 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.885 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.885 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.885 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.886 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.886 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.886 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.886 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.886 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.886 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.887 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.887 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.887 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.887 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.887 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.887 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.888 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.888 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.888 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.888 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.888 238607 DEBUG oslo_service.service [None req-1379aab1-5cce-477a-8d08-6aa883598e07 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.889 238607 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.906 238607 DEBUG nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.906 238607 DEBUG nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.907 238607 DEBUG nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 31 05:13:57 np0005603787 nova_compute[238603]: 2026-01-31 10:13:57.907 238607 DEBUG nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 31 05:13:57 np0005603787 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 05:13:57 np0005603787 systemd[1]: Started libvirt QEMU daemon.
Jan 31 05:13:58 np0005603787 nova_compute[238603]: 2026-01-31 10:13:58.006 238607 DEBUG nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fc73ecc5220> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 31 05:13:58 np0005603787 nova_compute[238603]: 2026-01-31 10:13:58.008 238607 DEBUG nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fc73ecc5220> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 31 05:13:58 np0005603787 nova_compute[238603]: 2026-01-31 10:13:58.010 238607 INFO nova.virt.libvirt.driver [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 31 05:13:58 np0005603787 nova_compute[238603]: 2026-01-31 10:13:58.029 238607 WARNING nova.virt.libvirt.driver [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 31 05:13:58 np0005603787 nova_compute[238603]: 2026-01-31 10:13:58.030 238607 DEBUG nova.virt.libvirt.volume.mount [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 31 05:13:58 np0005603787 nova_compute[238603]: 2026-01-31 10:13:58.895 238607 INFO nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Libvirt host capabilities <capabilities>
Jan 31 05:13:58 np0005603787 nova_compute[238603]: 
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  <host>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <uuid>85b121ac-a71f-4df5-9fa2-0ab94d362cec</uuid>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <cpu>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <arch>x86_64</arch>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model>EPYC-Rome-v4</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <vendor>AMD</vendor>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <microcode version='16777317'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <signature family='23' model='49' stepping='0'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='x2apic'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='tsc-deadline'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='osxsave'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='hypervisor'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='tsc_adjust'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='spec-ctrl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='stibp'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='arch-capabilities'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='ssbd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='cmp_legacy'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='topoext'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='virt-ssbd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='lbrv'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='tsc-scale'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='vmcb-clean'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='pause-filter'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='pfthreshold'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='svme-addr-chk'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='rdctl-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='skip-l1dfl-vmentry'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='mds-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature name='pschange-mc-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <pages unit='KiB' size='4'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <pages unit='KiB' size='2048'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <pages unit='KiB' size='1048576'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </cpu>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <power_management>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <suspend_mem/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </power_management>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <iommu support='no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <migration_features>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <live/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <uri_transports>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <uri_transport>tcp</uri_transport>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <uri_transport>rdma</uri_transport>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </uri_transports>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </migration_features>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <topology>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <cells num='1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <cell id='0'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:          <memory unit='KiB'>7864292</memory>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:          <pages unit='KiB' size='4'>1966073</pages>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:          <pages unit='KiB' size='2048'>0</pages>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:          <distances>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:            <sibling id='0' value='10'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:          </distances>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:          <cpus num='8'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:          </cpus>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        </cell>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </cells>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </topology>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <cache>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </cache>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <secmodel>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model>selinux</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <doi>0</doi>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </secmodel>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <secmodel>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model>dac</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <doi>0</doi>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </secmodel>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  </host>
Jan 31 05:13:58 np0005603787 nova_compute[238603]: 
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  <guest>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <os_type>hvm</os_type>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <arch name='i686'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <wordsize>32</wordsize>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <domain type='qemu'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <domain type='kvm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </arch>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <features>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <pae/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <nonpae/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <acpi default='on' toggle='yes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <apic default='on' toggle='no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <cpuselection/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <deviceboot/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <disksnapshot default='on' toggle='no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <externalSnapshot/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </features>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  </guest>
Jan 31 05:13:58 np0005603787 nova_compute[238603]: 
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  <guest>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <os_type>hvm</os_type>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <arch name='x86_64'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <wordsize>64</wordsize>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <domain type='qemu'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <domain type='kvm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </arch>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <features>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <acpi default='on' toggle='yes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <apic default='on' toggle='no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <cpuselection/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <deviceboot/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <disksnapshot default='on' toggle='no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <externalSnapshot/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </features>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  </guest>
Jan 31 05:13:58 np0005603787 nova_compute[238603]: 
Jan 31 05:13:58 np0005603787 nova_compute[238603]: </capabilities>
Jan 31 05:13:58 np0005603787 nova_compute[238603]: #033[00m
Jan 31 05:13:58 np0005603787 nova_compute[238603]: 2026-01-31 10:13:58.904 238607 DEBUG nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 31 05:13:58 np0005603787 nova_compute[238603]: 2026-01-31 10:13:58.931 238607 DEBUG nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 31 05:13:58 np0005603787 nova_compute[238603]: <domainCapabilities>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  <domain>kvm</domain>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  <arch>i686</arch>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  <vcpu max='240'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  <iothreads supported='yes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  <os supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <enum name='firmware'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <loader supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>rom</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>pflash</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='readonly'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>yes</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>no</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='secure'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>no</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </loader>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  </os>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  <cpu>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <mode name='host-passthrough' supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='hostPassthroughMigratable'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>on</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>off</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </mode>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <mode name='maximum' supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='maximumMigratable'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>on</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>off</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </mode>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <mode name='host-model' supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <vendor>AMD</vendor>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='x2apic'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='hypervisor'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='stibp'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='ssbd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='overflow-recov'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='succor'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='ibrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='lbrv'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='tsc-scale'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='flushbyasid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='pause-filter'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='pfthreshold'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <feature policy='disable' name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </mode>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <mode name='custom' supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Broadwell'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-IBRS'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-noTSX'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-v2'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-v3'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-v4'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='ClearwaterForest'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bhi-ctrl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bhi-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ddpd-u'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='intel-psfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ipred-ctrl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='lam'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rrsba-ctrl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sha512'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sm3'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sm4'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='ClearwaterForest-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bhi-ctrl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bhi-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ddpd-u'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='intel-psfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ipred-ctrl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='lam'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rrsba-ctrl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sha512'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sm3'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sm4'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Cooperlake'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Cooperlake-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Cooperlake-v2'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Denverton'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='mpx'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Denverton-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='mpx'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Denverton-v2'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Denverton-v3'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Dhyana-v2'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Genoa'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fs-gs-base-ns'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='perfmon-v2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Milan'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Milan-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Milan-v2'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Milan-v3'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Rome'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Rome-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Rome-v2'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Rome-v3'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Turin'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vp2intersect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fs-gs-base-ns'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibpb-brtype'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='perfmon-v2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='prefetchi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sbpb'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='srso-user-kernel-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Turin-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vp2intersect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fs-gs-base-ns'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibpb-brtype'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='perfmon-v2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='prefetchi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sbpb'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='srso-user-kernel-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='EPYC-v3'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='EPYC-v4'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='EPYC-v5'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='GraniteRapids'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-fp16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='GraniteRapids-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-fp16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='GraniteRapids-v2'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-fp16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx10'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx10-128'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx10-256'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx10-512'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='GraniteRapids-v3'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-fp16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx10'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx10-128'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx10-256'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx10-512'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Haswell'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Haswell-IBRS'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Haswell-noTSX'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Haswell-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Haswell-v2'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Haswell-v3'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Haswell-v4'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v2'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v3'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v4'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v5'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v6'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v7'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='IvyBridge'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='IvyBridge-IBRS'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='IvyBridge-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='IvyBridge-v2'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='KnightsMill'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-4fmaps'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-4vnniw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512er'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512pf'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='KnightsMill-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-4fmaps'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-4vnniw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512er'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512pf'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Opteron_G4'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fma4'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xop'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Opteron_G4-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fma4'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xop'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Opteron_G5'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fma4'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='tbm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xop'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Opteron_G5-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fma4'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='tbm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xop'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids-v2'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids-v3'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids-v4'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='SierraForest'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='SierraForest-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='SierraForest-v2'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bhi-ctrl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='intel-psfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ipred-ctrl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='lam'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rrsba-ctrl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='SierraForest-v3'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bhi-ctrl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='intel-psfd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ipred-ctrl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='lam'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rrsba-ctrl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-v2'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-v3'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-v4'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v2'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v3'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v4'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v5'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Snowridge'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='core-capability'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='mpx'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='split-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Snowridge-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='core-capability'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='mpx'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='split-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Snowridge-v2'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='core-capability'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='split-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Snowridge-v3'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='core-capability'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='split-lock-detect'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='Snowridge-v4'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='athlon'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='3dnow'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='3dnowext'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='athlon-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='3dnow'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='3dnowext'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='core2duo'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='core2duo-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='coreduo'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='coreduo-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='n270'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='n270-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='phenom'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='3dnow'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='3dnowext'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <blockers model='phenom-v1'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='3dnow'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <feature name='3dnowext'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </mode>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  </cpu>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  <memoryBacking supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <enum name='sourceType'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <value>file</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <value>anonymous</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <value>memfd</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  </memoryBacking>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  <devices>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <disk supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='diskDevice'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>disk</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>cdrom</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>floppy</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>lun</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='bus'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>ide</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>fdc</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>scsi</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>virtio</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>usb</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>sata</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='model'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>virtio</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>virtio-transitional</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>virtio-non-transitional</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </disk>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <graphics supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>vnc</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>egl-headless</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>dbus</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </graphics>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <video supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='modelType'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>vga</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>cirrus</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>virtio</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>none</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>bochs</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>ramfb</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </video>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <hostdev supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='mode'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>subsystem</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='startupPolicy'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>default</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>mandatory</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>requisite</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>optional</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='subsysType'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>usb</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>pci</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>scsi</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='capsType'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='pciBackend'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </hostdev>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <rng supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='model'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>virtio</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>virtio-transitional</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>virtio-non-transitional</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='backendModel'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>random</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>egd</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>builtin</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </rng>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <filesystem supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='driverType'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>path</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>handle</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>virtiofs</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </filesystem>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <tpm supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='model'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>tpm-tis</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>tpm-crb</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='backendModel'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>emulator</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>external</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='backendVersion'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>2.0</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </tpm>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <redirdev supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='bus'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>usb</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </redirdev>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <channel supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>pty</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>unix</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </channel>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <crypto supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='model'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>qemu</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='backendModel'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>builtin</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </crypto>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <interface supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='backendType'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>default</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>passt</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </interface>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <panic supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='model'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>isa</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>hyperv</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </panic>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <console supported='yes'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>null</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>vc</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>pty</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>dev</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>file</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>pipe</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>stdio</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>udp</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>tcp</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>unix</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>qemu-vdagent</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:        <value>dbus</value>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    </console>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  </devices>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:  <features>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <gic supported='no'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <vmcoreinfo supported='yes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <genid supported='yes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <backingStoreInput supported='yes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <backup supported='yes'/>
Jan 31 05:13:58 np0005603787 nova_compute[238603]:    <async-teardown supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <s390-pv supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <ps2 supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <tdx supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <sev supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <sgx supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <hyperv supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='features'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>relaxed</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vapic</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>spinlocks</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vpindex</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>runtime</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>synic</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>stimer</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>reset</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vendor_id</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>frequencies</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>reenlightenment</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>tlbflush</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>ipi</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>avic</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>emsr_bitmap</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>xmm_input</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <defaults>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <spinlocks>4095</spinlocks>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <stimer_direct>on</stimer_direct>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </defaults>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </hyperv>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <launchSecurity supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  </features>
Jan 31 05:13:59 np0005603787 nova_compute[238603]: </domainCapabilities>
Jan 31 05:13:59 np0005603787 nova_compute[238603]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:58.939 238607 DEBUG nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 31 05:13:59 np0005603787 nova_compute[238603]: <domainCapabilities>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <domain>kvm</domain>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <arch>i686</arch>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <vcpu max='4096'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <iothreads supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <os supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <enum name='firmware'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <loader supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>rom</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>pflash</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='readonly'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>yes</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>no</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='secure'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>no</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </loader>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  </os>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <cpu>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <mode name='host-passthrough' supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='hostPassthroughMigratable'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>on</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>off</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </mode>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <mode name='maximum' supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='maximumMigratable'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>on</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>off</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </mode>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <mode name='host-model' supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <vendor>AMD</vendor>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='x2apic'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='hypervisor'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='stibp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='ssbd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='overflow-recov'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='succor'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='ibrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='lbrv'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='tsc-scale'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='flushbyasid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='pause-filter'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='pfthreshold'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='disable' name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </mode>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <mode name='custom' supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-noTSX'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='ClearwaterForest'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bhi-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bhi-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ddpd-u'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='intel-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ipred-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='lam'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rrsba-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sha512'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sm3'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sm4'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='ClearwaterForest-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bhi-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bhi-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ddpd-u'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='intel-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ipred-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='lam'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rrsba-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sha512'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sm3'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sm4'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cooperlake'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cooperlake-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cooperlake-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Denverton'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mpx'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Denverton-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mpx'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Denverton-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Denverton-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Dhyana-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Genoa'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fs-gs-base-ns'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='perfmon-v2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Milan'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Milan-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Milan-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Milan-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Rome'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Rome-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Rome-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Rome-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Turin'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vp2intersect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fs-gs-base-ns'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibpb-brtype'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='perfmon-v2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbpb'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='srso-user-kernel-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Turin-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vp2intersect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fs-gs-base-ns'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibpb-brtype'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='perfmon-v2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbpb'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='srso-user-kernel-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-v5'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='GraniteRapids'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='GraniteRapids-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='GraniteRapids-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10-128'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10-256'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10-512'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='GraniteRapids-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10-128'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10-256'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10-512'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-noTSX'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v5'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v6'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v7'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='IvyBridge'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='IvyBridge-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='IvyBridge-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='IvyBridge-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='KnightsMill'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-4fmaps'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-4vnniw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512er'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512pf'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='KnightsMill-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-4fmaps'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-4vnniw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512er'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512pf'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Opteron_G4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fma4'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xop'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Opteron_G4-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fma4'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xop'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Opteron_G5'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fma4'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tbm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xop'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Opteron_G5-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fma4'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tbm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xop'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SierraForest'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SierraForest-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SierraForest-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bhi-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='intel-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ipred-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='lam'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rrsba-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SierraForest-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bhi-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='intel-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ipred-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='lam'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rrsba-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v5'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Snowridge'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='core-capability'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mpx'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='split-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Snowridge-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='core-capability'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mpx'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='split-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Snowridge-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='core-capability'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='split-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Snowridge-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='core-capability'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='split-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Snowridge-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='athlon'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnow'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnowext'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='athlon-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnow'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnowext'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='core2duo'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='core2duo-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='coreduo'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='coreduo-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='n270'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='n270-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='phenom'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnow'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnowext'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='phenom-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnow'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnowext'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </mode>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  </cpu>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <memoryBacking supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <enum name='sourceType'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <value>file</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <value>anonymous</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <value>memfd</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  </memoryBacking>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <devices>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <disk supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='diskDevice'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>disk</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>cdrom</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>floppy</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>lun</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='bus'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>fdc</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>scsi</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>usb</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>sata</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='model'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio-transitional</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio-non-transitional</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </disk>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <graphics supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vnc</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>egl-headless</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>dbus</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </graphics>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <video supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='modelType'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vga</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>cirrus</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>none</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>bochs</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>ramfb</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </video>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <hostdev supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='mode'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>subsystem</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='startupPolicy'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>default</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>mandatory</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>requisite</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>optional</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='subsysType'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>usb</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>pci</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>scsi</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='capsType'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='pciBackend'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </hostdev>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <rng supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='model'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio-transitional</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio-non-transitional</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='backendModel'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>random</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>egd</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>builtin</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </rng>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <filesystem supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='driverType'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>path</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>handle</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtiofs</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </filesystem>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <tpm supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='model'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>tpm-tis</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>tpm-crb</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='backendModel'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>emulator</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>external</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='backendVersion'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>2.0</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </tpm>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <redirdev supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='bus'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>usb</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </redirdev>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <channel supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>pty</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>unix</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </channel>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <crypto supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='model'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>qemu</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='backendModel'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>builtin</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </crypto>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <interface supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='backendType'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>default</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>passt</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </interface>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <panic supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='model'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>isa</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>hyperv</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </panic>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <console supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>null</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vc</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>pty</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>dev</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>file</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>pipe</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>stdio</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>udp</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>tcp</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>unix</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>qemu-vdagent</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>dbus</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </console>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  </devices>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <features>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <gic supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <vmcoreinfo supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <genid supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <backingStoreInput supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <backup supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <async-teardown supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <s390-pv supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <ps2 supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <tdx supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <sev supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <sgx supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <hyperv supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='features'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>relaxed</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vapic</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>spinlocks</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vpindex</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>runtime</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>synic</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>stimer</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>reset</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vendor_id</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>frequencies</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>reenlightenment</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>tlbflush</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>ipi</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>avic</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>emsr_bitmap</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>xmm_input</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <defaults>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <spinlocks>4095</spinlocks>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <stimer_direct>on</stimer_direct>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </defaults>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </hyperv>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <launchSecurity supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  </features>
Jan 31 05:13:59 np0005603787 nova_compute[238603]: </domainCapabilities>
Jan 31 05:13:59 np0005603787 nova_compute[238603]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:58.986 238607 DEBUG nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:58.991 238607 DEBUG nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 31 05:13:59 np0005603787 nova_compute[238603]: <domainCapabilities>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <domain>kvm</domain>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <arch>x86_64</arch>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <vcpu max='240'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <iothreads supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <os supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <enum name='firmware'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <loader supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>rom</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>pflash</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='readonly'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>yes</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>no</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='secure'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>no</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </loader>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  </os>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <cpu>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <mode name='host-passthrough' supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='hostPassthroughMigratable'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>on</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>off</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </mode>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <mode name='maximum' supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='maximumMigratable'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>on</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>off</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </mode>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <mode name='host-model' supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <vendor>AMD</vendor>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='x2apic'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='hypervisor'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='stibp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='ssbd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='overflow-recov'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='succor'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='ibrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='lbrv'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='tsc-scale'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='flushbyasid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='pause-filter'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='pfthreshold'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='disable' name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </mode>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <mode name='custom' supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-noTSX'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='ClearwaterForest'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bhi-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bhi-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ddpd-u'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='intel-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ipred-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='lam'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rrsba-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sha512'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sm3'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sm4'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='ClearwaterForest-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bhi-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bhi-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ddpd-u'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='intel-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ipred-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='lam'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rrsba-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sha512'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sm3'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sm4'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cooperlake'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cooperlake-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cooperlake-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Denverton'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mpx'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Denverton-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mpx'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Denverton-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Denverton-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Dhyana-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Genoa'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fs-gs-base-ns'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='perfmon-v2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Milan'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Milan-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Milan-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Milan-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Rome'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Rome-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Rome-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Rome-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Turin'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vp2intersect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fs-gs-base-ns'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibpb-brtype'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='perfmon-v2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbpb'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='srso-user-kernel-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Turin-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vp2intersect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fs-gs-base-ns'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibpb-brtype'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='perfmon-v2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbpb'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='srso-user-kernel-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-v5'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='GraniteRapids'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='GraniteRapids-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='GraniteRapids-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10-128'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10-256'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10-512'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='GraniteRapids-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10-128'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10-256'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10-512'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-noTSX'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v5'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v6'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v7'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='IvyBridge'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='IvyBridge-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='IvyBridge-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='IvyBridge-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='KnightsMill'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-4fmaps'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-4vnniw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512er'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512pf'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='KnightsMill-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-4fmaps'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-4vnniw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512er'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512pf'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Opteron_G4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fma4'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xop'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Opteron_G4-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fma4'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xop'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Opteron_G5'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fma4'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tbm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xop'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Opteron_G5-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fma4'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tbm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xop'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SierraForest'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SierraForest-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SierraForest-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bhi-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='intel-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ipred-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='lam'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rrsba-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SierraForest-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bhi-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='intel-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ipred-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='lam'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rrsba-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v5'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Snowridge'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='core-capability'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mpx'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='split-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Snowridge-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='core-capability'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mpx'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='split-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Snowridge-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='core-capability'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='split-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Snowridge-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='core-capability'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='split-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Snowridge-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='athlon'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnow'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnowext'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='athlon-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnow'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnowext'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='core2duo'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='core2duo-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='coreduo'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='coreduo-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='n270'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='n270-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='phenom'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnow'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnowext'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='phenom-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnow'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnowext'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </mode>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  </cpu>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <memoryBacking supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <enum name='sourceType'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <value>file</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <value>anonymous</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <value>memfd</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  </memoryBacking>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <devices>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <disk supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='diskDevice'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>disk</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>cdrom</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>floppy</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>lun</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='bus'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>ide</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>fdc</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>scsi</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>usb</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>sata</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='model'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio-transitional</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio-non-transitional</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </disk>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <graphics supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vnc</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>egl-headless</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>dbus</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </graphics>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <video supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='modelType'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vga</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>cirrus</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>none</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>bochs</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>ramfb</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </video>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <hostdev supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='mode'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>subsystem</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='startupPolicy'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>default</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>mandatory</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>requisite</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>optional</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='subsysType'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>usb</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>pci</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>scsi</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='capsType'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='pciBackend'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </hostdev>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <rng supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='model'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio-transitional</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio-non-transitional</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='backendModel'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>random</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>egd</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>builtin</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </rng>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <filesystem supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='driverType'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>path</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>handle</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtiofs</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </filesystem>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <tpm supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='model'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>tpm-tis</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>tpm-crb</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='backendModel'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>emulator</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>external</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='backendVersion'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>2.0</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </tpm>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <redirdev supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='bus'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>usb</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </redirdev>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <channel supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>pty</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>unix</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </channel>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <crypto supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='model'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>qemu</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='backendModel'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>builtin</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </crypto>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <interface supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='backendType'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>default</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>passt</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </interface>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <panic supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='model'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>isa</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>hyperv</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </panic>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <console supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>null</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vc</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>pty</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>dev</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>file</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>pipe</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>stdio</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>udp</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>tcp</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>unix</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>qemu-vdagent</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>dbus</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </console>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  </devices>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <features>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <gic supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <vmcoreinfo supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <genid supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <backingStoreInput supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <backup supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <async-teardown supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <s390-pv supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <ps2 supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <tdx supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <sev supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <sgx supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <hyperv supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='features'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>relaxed</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vapic</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>spinlocks</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vpindex</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>runtime</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>synic</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>stimer</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>reset</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vendor_id</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>frequencies</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>reenlightenment</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>tlbflush</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>ipi</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>avic</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>emsr_bitmap</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>xmm_input</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <defaults>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <spinlocks>4095</spinlocks>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <stimer_direct>on</stimer_direct>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </defaults>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </hyperv>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <launchSecurity supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  </features>
Jan 31 05:13:59 np0005603787 nova_compute[238603]: </domainCapabilities>
Jan 31 05:13:59 np0005603787 nova_compute[238603]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:59.055 238607 DEBUG nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 31 05:13:59 np0005603787 nova_compute[238603]: <domainCapabilities>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <domain>kvm</domain>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <arch>x86_64</arch>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <vcpu max='4096'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <iothreads supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <os supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <enum name='firmware'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <value>efi</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <loader supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>rom</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>pflash</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='readonly'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>yes</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>no</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='secure'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>yes</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>no</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </loader>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  </os>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <cpu>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <mode name='host-passthrough' supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='hostPassthroughMigratable'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>on</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>off</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </mode>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <mode name='maximum' supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='maximumMigratable'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>on</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>off</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </mode>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <mode name='host-model' supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <vendor>AMD</vendor>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='x2apic'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='hypervisor'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='stibp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='ssbd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='overflow-recov'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='succor'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='ibrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='lbrv'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='tsc-scale'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='flushbyasid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='pause-filter'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='pfthreshold'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <feature policy='disable' name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </mode>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <mode name='custom' supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-noTSX'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Broadwell-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='ClearwaterForest'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bhi-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bhi-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ddpd-u'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='intel-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ipred-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='lam'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rrsba-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sha512'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sm3'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sm4'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='ClearwaterForest-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bhi-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bhi-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ddpd-u'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='intel-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ipred-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='lam'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rrsba-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sha512'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sm3'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sm4'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cooperlake'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cooperlake-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Cooperlake-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Denverton'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mpx'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Denverton-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mpx'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Denverton-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Denverton-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Dhyana-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Genoa'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fs-gs-base-ns'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='perfmon-v2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Milan'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Milan-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Milan-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Milan-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Rome'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Rome-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Rome-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Rome-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Turin'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vp2intersect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fs-gs-base-ns'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibpb-brtype'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='perfmon-v2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbpb'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='srso-user-kernel-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-Turin-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amd-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='auto-ibrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vp2intersect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fs-gs-base-ns'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibpb-brtype'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='no-nested-data-bp'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='null-sel-clr-base'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='perfmon-v2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbpb'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='srso-user-kernel-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='stibp-always-on'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='EPYC-v5'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='GraniteRapids'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='GraniteRapids-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='GraniteRapids-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10-128'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10-256'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10-512'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='GraniteRapids-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10-128'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10-256'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx10-512'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='prefetchiti'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-noTSX'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Haswell-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v5'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v6'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Icelake-Server-v7'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='IvyBridge'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='IvyBridge-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='IvyBridge-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='IvyBridge-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='KnightsMill'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-4fmaps'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-4vnniw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512er'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512pf'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='KnightsMill-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-4fmaps'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-4vnniw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512er'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512pf'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Opteron_G4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fma4'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xop'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Opteron_G4-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fma4'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xop'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Opteron_G5'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fma4'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tbm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xop'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Opteron_G5-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fma4'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tbm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xop'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SapphireRapids-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='amx-tile'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-bf16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-fp16'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512-vpopcntdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bitalg'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vbmi2'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrc'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fzrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='la57'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='taa-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='tsx-ldtrk'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SierraForest'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SierraForest-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SierraForest-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bhi-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='intel-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ipred-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='lam'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rrsba-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='SierraForest-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ifma'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-ne-convert'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx-vnni-int8'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bhi-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='bus-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cmpccxadd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fbsdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='fsrs'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ibrs-all'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='intel-psfd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ipred-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='lam'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mcdt-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pbrsb-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='psdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rrsba-ctrl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='sbdr-ssdp-no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='serialize'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vaes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='vpclmulqdq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Client-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='hle'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='rtm'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Skylake-Server-v5'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512bw'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512cd'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512dq'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512f'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='avx512vl'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='invpcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pcid'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='pku'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Snowridge'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='core-capability'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mpx'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='split-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Snowridge-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='core-capability'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='mpx'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='split-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Snowridge-v2'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='core-capability'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='split-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Snowridge-v3'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='core-capability'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='split-lock-detect'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='Snowridge-v4'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='cldemote'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='erms'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='gfni'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdir64b'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='movdiri'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='xsaves'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='athlon'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnow'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnowext'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='athlon-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnow'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnowext'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='core2duo'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='core2duo-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='coreduo'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='coreduo-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='n270'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='n270-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='ss'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='phenom'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnow'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnowext'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <blockers model='phenom-v1'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnow'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <feature name='3dnowext'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </blockers>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </mode>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  </cpu>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <memoryBacking supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <enum name='sourceType'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <value>file</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <value>anonymous</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <value>memfd</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  </memoryBacking>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <devices>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <disk supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='diskDevice'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>disk</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>cdrom</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>floppy</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>lun</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='bus'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>fdc</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>scsi</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>usb</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>sata</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='model'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio-transitional</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio-non-transitional</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </disk>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <graphics supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vnc</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>egl-headless</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>dbus</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </graphics>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <video supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='modelType'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vga</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>cirrus</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>none</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>bochs</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>ramfb</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </video>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <hostdev supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='mode'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>subsystem</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='startupPolicy'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>default</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>mandatory</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>requisite</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>optional</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='subsysType'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>usb</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>pci</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>scsi</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='capsType'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='pciBackend'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </hostdev>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <rng supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='model'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio-transitional</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtio-non-transitional</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='backendModel'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>random</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>egd</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>builtin</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </rng>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <filesystem supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='driverType'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>path</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>handle</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>virtiofs</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </filesystem>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <tpm supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='model'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>tpm-tis</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>tpm-crb</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='backendModel'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>emulator</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>external</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='backendVersion'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>2.0</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </tpm>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <redirdev supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='bus'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>usb</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </redirdev>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <channel supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>pty</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>unix</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </channel>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <crypto supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='model'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>qemu</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='backendModel'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>builtin</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </crypto>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <interface supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='backendType'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>default</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>passt</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </interface>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <panic supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='model'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>isa</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>hyperv</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </panic>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <console supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='type'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>null</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vc</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>pty</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>dev</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>file</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>pipe</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>stdio</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>udp</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>tcp</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>unix</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>qemu-vdagent</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>dbus</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </console>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  </devices>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  <features>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <gic supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <vmcoreinfo supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <genid supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <backingStoreInput supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <backup supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <async-teardown supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <s390-pv supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <ps2 supported='yes'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <tdx supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <sev supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <sgx supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <hyperv supported='yes'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <enum name='features'>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>relaxed</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vapic</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>spinlocks</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vpindex</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>runtime</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>synic</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>stimer</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>reset</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>vendor_id</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>frequencies</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>reenlightenment</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>tlbflush</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>ipi</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>avic</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>emsr_bitmap</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <value>xmm_input</value>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </enum>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      <defaults>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <spinlocks>4095</spinlocks>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <stimer_direct>on</stimer_direct>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:      </defaults>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    </hyperv>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:    <launchSecurity supported='no'/>
Jan 31 05:13:59 np0005603787 nova_compute[238603]:  </features>
Jan 31 05:13:59 np0005603787 nova_compute[238603]: </domainCapabilities>
Jan 31 05:13:59 np0005603787 nova_compute[238603]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:59.112 238607 DEBUG nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:59.113 238607 DEBUG nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:59.113 238607 DEBUG nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:59.117 238607 INFO nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Secure Boot support detected
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:59.119 238607 INFO nova.virt.libvirt.driver [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:59.119 238607 INFO nova.virt.libvirt.driver [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:59.126 238607 DEBUG nova.virt.libvirt.driver [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:59.172 238607 INFO nova.virt.node [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Determined node identity 207962d2-1ba9-4db2-8533-2a30e7131f3e from /var/lib/nova/compute_id
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:59.193 238607 WARNING nova.compute.manager [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Compute nodes ['207962d2-1ba9-4db2-8533-2a30e7131f3e'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:59.229 238607 INFO nova.compute.manager [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:59.259 238607 WARNING nova.compute.manager [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:59.259 238607 DEBUG oslo_concurrency.lockutils [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:59.260 238607 DEBUG oslo_concurrency.lockutils [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:59.260 238607 DEBUG oslo_concurrency.lockutils [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:59.260 238607 DEBUG nova.compute.resource_tracker [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:59.260 238607 DEBUG oslo_concurrency.processutils [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 05:13:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:13:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:13:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/787941143' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:13:59 np0005603787 nova_compute[238603]: 2026-01-31 10:13:59.791 238607 DEBUG oslo_concurrency.processutils [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 05:13:59 np0005603787 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 05:13:59 np0005603787 systemd[1]: Started libvirt nodedev daemon.
Jan 31 05:14:00 np0005603787 nova_compute[238603]: 2026-01-31 10:14:00.064 238607 WARNING nova.virt.libvirt.driver [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 05:14:00 np0005603787 nova_compute[238603]: 2026-01-31 10:14:00.065 238607 DEBUG nova.compute.resource_tracker [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5190MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 05:14:00 np0005603787 nova_compute[238603]: 2026-01-31 10:14:00.066 238607 DEBUG oslo_concurrency.lockutils [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 05:14:00 np0005603787 nova_compute[238603]: 2026-01-31 10:14:00.066 238607 DEBUG oslo_concurrency.lockutils [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 05:14:00 np0005603787 nova_compute[238603]: 2026-01-31 10:14:00.087 238607 WARNING nova.compute.resource_tracker [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] No compute node record for compute-0.ctlplane.example.com:207962d2-1ba9-4db2-8533-2a30e7131f3e: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 207962d2-1ba9-4db2-8533-2a30e7131f3e could not be found.
Jan 31 05:14:00 np0005603787 nova_compute[238603]: 2026-01-31 10:14:00.110 238607 INFO nova.compute.resource_tracker [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 207962d2-1ba9-4db2-8533-2a30e7131f3e
Jan 31 05:14:00 np0005603787 nova_compute[238603]: 2026-01-31 10:14:00.191 238607 DEBUG nova.compute.resource_tracker [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 05:14:00 np0005603787 nova_compute[238603]: 2026-01-31 10:14:00.191 238607 DEBUG nova.compute.resource_tracker [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 05:14:00 np0005603787 nova_compute[238603]: 2026-01-31 10:14:00.972 238607 INFO nova.scheduler.client.report [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] [req-d705669f-7786-4768-81e8-6208af2db7b7] Created resource provider record via placement API for resource provider with UUID 207962d2-1ba9-4db2-8533-2a30e7131f3e and name compute-0.ctlplane.example.com.
Jan 31 05:14:01 np0005603787 nova_compute[238603]: 2026-01-31 10:14:01.322 238607 DEBUG oslo_concurrency.processutils [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 05:14:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:14:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/838867042' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:14:01 np0005603787 nova_compute[238603]: 2026-01-31 10:14:01.805 238607 DEBUG oslo_concurrency.processutils [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:14:01 np0005603787 nova_compute[238603]: 2026-01-31 10:14:01.809 238607 DEBUG nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 31 05:14:01 np0005603787 nova_compute[238603]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Jan 31 05:14:01 np0005603787 nova_compute[238603]: 2026-01-31 10:14:01.809 238607 INFO nova.virt.libvirt.host [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] kernel doesn't support AMD SEV#033[00m
Jan 31 05:14:01 np0005603787 nova_compute[238603]: 2026-01-31 10:14:01.810 238607 DEBUG nova.compute.provider_tree [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Updating inventory in ProviderTree for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 05:14:01 np0005603787 nova_compute[238603]: 2026-01-31 10:14:01.811 238607 DEBUG nova.virt.libvirt.driver [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 05:14:01 np0005603787 nova_compute[238603]: 2026-01-31 10:14:01.862 238607 DEBUG nova.scheduler.client.report [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Updated inventory for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Jan 31 05:14:01 np0005603787 nova_compute[238603]: 2026-01-31 10:14:01.862 238607 DEBUG nova.compute.provider_tree [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Updating resource provider 207962d2-1ba9-4db2-8533-2a30e7131f3e generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 31 05:14:01 np0005603787 nova_compute[238603]: 2026-01-31 10:14:01.863 238607 DEBUG nova.compute.provider_tree [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Updating inventory in ProviderTree for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 05:14:01 np0005603787 nova_compute[238603]: 2026-01-31 10:14:01.996 238607 DEBUG nova.compute.provider_tree [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Updating resource provider 207962d2-1ba9-4db2-8533-2a30e7131f3e generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 31 05:14:02 np0005603787 nova_compute[238603]: 2026-01-31 10:14:02.024 238607 DEBUG nova.compute.resource_tracker [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:14:02 np0005603787 nova_compute[238603]: 2026-01-31 10:14:02.024 238607 DEBUG oslo_concurrency.lockutils [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:14:02 np0005603787 nova_compute[238603]: 2026-01-31 10:14:02.024 238607 DEBUG nova.service [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Jan 31 05:14:02 np0005603787 nova_compute[238603]: 2026-01-31 10:14:02.137 238607 DEBUG nova.service [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Jan 31 05:14:02 np0005603787 nova_compute[238603]: 2026-01-31 10:14:02.138 238607 DEBUG nova.servicegroup.drivers.db [None req-50cd50b6-8743-4170-9204-1de9e45340aa - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Jan 31 05:14:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:14:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:14:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:11 np0005603787 podman[239015]: 2026-01-31 10:14:11.82803433 +0000 UTC m=+0.044248683 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 05:14:11 np0005603787 podman[239014]: 2026-01-31 10:14:11.855703392 +0000 UTC m=+0.072022938 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 05:14:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:14:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:14:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:14:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:14:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:14:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:14:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:14:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:17 np0005603787 nova_compute[238603]: 2026-01-31 10:14:17.140 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:14:17 np0005603787 nova_compute[238603]: 2026-01-31 10:14:17.172 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:14:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:14:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:14:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2951845800' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:14:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:14:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2951845800' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:14:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:14:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3561785633' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:14:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:14:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3561785633' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:14:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:14:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2574633211' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:14:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:14:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2574633211' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:14:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:14:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:14:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 05:14:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Jan 31 05:14:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:14:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 05:14:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 05:14:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:14:37.053 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:14:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:14:37.053 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:14:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:14:37.054 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:14:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 05:14:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:14:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 05:14:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 31 05:14:42 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:14:42 np0005603787 podman[239062]: 2026-01-31 10:14:42.830178368 +0000 UTC m=+0.047785099 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 05:14:42 np0005603787 podman[239061]: 2026-01-31 10:14:42.859963277 +0000 UTC m=+0.083673743 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:14:43
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.control', 'vms', 'default.rgw.log', '.rgw.root', 'volumes', 'backups']
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:14:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:14:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:14:46 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:14:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:14:46 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:14:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:14:46 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:14:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:14:46 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:14:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:14:46 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:14:46 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:14:46 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:14:46 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:14:46 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:14:46 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:14:47 np0005603787 podman[239251]: 2026-01-31 10:14:47.185387862 +0000 UTC m=+0.040400029 container create 663b9ae4323931df8600a14acf36638c0a57c36e62c4be425fb538ac2ebde1fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:14:47 np0005603787 systemd[1]: Started libpod-conmon-663b9ae4323931df8600a14acf36638c0a57c36e62c4be425fb538ac2ebde1fe.scope.
Jan 31 05:14:47 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:14:47 np0005603787 podman[239251]: 2026-01-31 10:14:47.241373462 +0000 UTC m=+0.096385649 container init 663b9ae4323931df8600a14acf36638c0a57c36e62c4be425fb538ac2ebde1fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 05:14:47 np0005603787 podman[239251]: 2026-01-31 10:14:47.246919273 +0000 UTC m=+0.101931440 container start 663b9ae4323931df8600a14acf36638c0a57c36e62c4be425fb538ac2ebde1fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 05:14:47 np0005603787 podman[239251]: 2026-01-31 10:14:47.250200072 +0000 UTC m=+0.105212239 container attach 663b9ae4323931df8600a14acf36638c0a57c36e62c4be425fb538ac2ebde1fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 05:14:47 np0005603787 loving_keldysh[239267]: 167 167
Jan 31 05:14:47 np0005603787 systemd[1]: libpod-663b9ae4323931df8600a14acf36638c0a57c36e62c4be425fb538ac2ebde1fe.scope: Deactivated successfully.
Jan 31 05:14:47 np0005603787 podman[239251]: 2026-01-31 10:14:47.251335043 +0000 UTC m=+0.106347210 container died 663b9ae4323931df8600a14acf36638c0a57c36e62c4be425fb538ac2ebde1fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 05:14:47 np0005603787 podman[239251]: 2026-01-31 10:14:47.164866744 +0000 UTC m=+0.019878951 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:14:47 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ea84aa4aa6b1a1441ae070caf5fd3f18c6304b0c4b195dc6f011b162f1a7aa24-merged.mount: Deactivated successfully.
Jan 31 05:14:47 np0005603787 podman[239251]: 2026-01-31 10:14:47.287819183 +0000 UTC m=+0.142831350 container remove 663b9ae4323931df8600a14acf36638c0a57c36e62c4be425fb538ac2ebde1fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_keldysh, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:14:47 np0005603787 systemd[1]: libpod-conmon-663b9ae4323931df8600a14acf36638c0a57c36e62c4be425fb538ac2ebde1fe.scope: Deactivated successfully.
Jan 31 05:14:47 np0005603787 podman[239291]: 2026-01-31 10:14:47.423904989 +0000 UTC m=+0.039482063 container create 3e6b7688172e1a8063a9c0280747dfa3e30ab6a920efcf3a76597e3fd9d62caf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_banach, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 05:14:47 np0005603787 systemd[1]: Started libpod-conmon-3e6b7688172e1a8063a9c0280747dfa3e30ab6a920efcf3a76597e3fd9d62caf.scope.
Jan 31 05:14:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:47 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:14:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0607444b7959ca7fb28958d0afcf08663be733d3fc488e01f394f23397c67e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:14:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0607444b7959ca7fb28958d0afcf08663be733d3fc488e01f394f23397c67e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:14:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0607444b7959ca7fb28958d0afcf08663be733d3fc488e01f394f23397c67e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:14:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0607444b7959ca7fb28958d0afcf08663be733d3fc488e01f394f23397c67e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:14:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0607444b7959ca7fb28958d0afcf08663be733d3fc488e01f394f23397c67e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:14:47 np0005603787 podman[239291]: 2026-01-31 10:14:47.405992322 +0000 UTC m=+0.021569416 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:14:47 np0005603787 podman[239291]: 2026-01-31 10:14:47.505617048 +0000 UTC m=+0.121194132 container init 3e6b7688172e1a8063a9c0280747dfa3e30ab6a920efcf3a76597e3fd9d62caf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_banach, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:14:47 np0005603787 podman[239291]: 2026-01-31 10:14:47.522163497 +0000 UTC m=+0.137740571 container start 3e6b7688172e1a8063a9c0280747dfa3e30ab6a920efcf3a76597e3fd9d62caf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_banach, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:14:47 np0005603787 podman[239291]: 2026-01-31 10:14:47.526406183 +0000 UTC m=+0.141983277 container attach 3e6b7688172e1a8063a9c0280747dfa3e30ab6a920efcf3a76597e3fd9d62caf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_banach, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:14:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:14:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 31 05:14:47 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1183236003' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 31 05:14:47 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14336 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 31 05:14:47 np0005603787 ceph-mgr[75453]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 05:14:47 np0005603787 ceph-mgr[75453]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 05:14:47 np0005603787 recursing_banach[239308]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:14:47 np0005603787 recursing_banach[239308]: --> All data devices are unavailable
Jan 31 05:14:47 np0005603787 systemd[1]: libpod-3e6b7688172e1a8063a9c0280747dfa3e30ab6a920efcf3a76597e3fd9d62caf.scope: Deactivated successfully.
Jan 31 05:14:47 np0005603787 podman[239291]: 2026-01-31 10:14:47.959000852 +0000 UTC m=+0.574577916 container died 3e6b7688172e1a8063a9c0280747dfa3e30ab6a920efcf3a76597e3fd9d62caf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_banach, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:14:47 np0005603787 systemd[1]: var-lib-containers-storage-overlay-a0607444b7959ca7fb28958d0afcf08663be733d3fc488e01f394f23397c67e2-merged.mount: Deactivated successfully.
Jan 31 05:14:47 np0005603787 podman[239291]: 2026-01-31 10:14:47.998657728 +0000 UTC m=+0.614234792 container remove 3e6b7688172e1a8063a9c0280747dfa3e30ab6a920efcf3a76597e3fd9d62caf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_banach, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:14:48 np0005603787 systemd[1]: libpod-conmon-3e6b7688172e1a8063a9c0280747dfa3e30ab6a920efcf3a76597e3fd9d62caf.scope: Deactivated successfully.
Jan 31 05:14:48 np0005603787 podman[239403]: 2026-01-31 10:14:48.477451082 +0000 UTC m=+0.041395485 container create 31d34f5c52cb0f82e7181ea65ca2ce3515ad28e6c6b9738731f73b11fce4b68c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 05:14:48 np0005603787 systemd[1]: Started libpod-conmon-31d34f5c52cb0f82e7181ea65ca2ce3515ad28e6c6b9738731f73b11fce4b68c.scope.
Jan 31 05:14:48 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:14:48 np0005603787 podman[239403]: 2026-01-31 10:14:48.545906371 +0000 UTC m=+0.109850754 container init 31d34f5c52cb0f82e7181ea65ca2ce3515ad28e6c6b9738731f73b11fce4b68c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shtern, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 31 05:14:48 np0005603787 podman[239403]: 2026-01-31 10:14:48.550416563 +0000 UTC m=+0.114360916 container start 31d34f5c52cb0f82e7181ea65ca2ce3515ad28e6c6b9738731f73b11fce4b68c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shtern, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 05:14:48 np0005603787 podman[239403]: 2026-01-31 10:14:48.456833983 +0000 UTC m=+0.020778376 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:14:48 np0005603787 condescending_shtern[239420]: 167 167
Jan 31 05:14:48 np0005603787 systemd[1]: libpod-31d34f5c52cb0f82e7181ea65ca2ce3515ad28e6c6b9738731f73b11fce4b68c.scope: Deactivated successfully.
Jan 31 05:14:48 np0005603787 podman[239403]: 2026-01-31 10:14:48.559042719 +0000 UTC m=+0.122987082 container attach 31d34f5c52cb0f82e7181ea65ca2ce3515ad28e6c6b9738731f73b11fce4b68c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 31 05:14:48 np0005603787 podman[239403]: 2026-01-31 10:14:48.559713916 +0000 UTC m=+0.123658279 container died 31d34f5c52cb0f82e7181ea65ca2ce3515ad28e6c6b9738731f73b11fce4b68c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shtern, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 05:14:48 np0005603787 systemd[1]: var-lib-containers-storage-overlay-395510eebe62d71402c2619463016f9b3b4e7e34bb09d58ec0a9941a2100529e-merged.mount: Deactivated successfully.
Jan 31 05:14:48 np0005603787 podman[239403]: 2026-01-31 10:14:48.602330794 +0000 UTC m=+0.166275157 container remove 31d34f5c52cb0f82e7181ea65ca2ce3515ad28e6c6b9738731f73b11fce4b68c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shtern, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 05:14:48 np0005603787 systemd[1]: libpod-conmon-31d34f5c52cb0f82e7181ea65ca2ce3515ad28e6c6b9738731f73b11fce4b68c.scope: Deactivated successfully.
Jan 31 05:14:48 np0005603787 podman[239444]: 2026-01-31 10:14:48.724190614 +0000 UTC m=+0.046025981 container create dd09f70c2c8383feadd97bea3c96c2a5aeb1bac5ceb658a194f6a133ba92b809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_snyder, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:14:48 np0005603787 systemd[1]: Started libpod-conmon-dd09f70c2c8383feadd97bea3c96c2a5aeb1bac5ceb658a194f6a133ba92b809.scope.
Jan 31 05:14:48 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:14:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78295a86f3bcb1f87e99f00b82db21dad7a298f17b9820012be43a8031450fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:14:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78295a86f3bcb1f87e99f00b82db21dad7a298f17b9820012be43a8031450fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:14:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78295a86f3bcb1f87e99f00b82db21dad7a298f17b9820012be43a8031450fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:14:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78295a86f3bcb1f87e99f00b82db21dad7a298f17b9820012be43a8031450fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:14:48 np0005603787 podman[239444]: 2026-01-31 10:14:48.792921871 +0000 UTC m=+0.114757238 container init dd09f70c2c8383feadd97bea3c96c2a5aeb1bac5ceb658a194f6a133ba92b809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_snyder, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:14:48 np0005603787 podman[239444]: 2026-01-31 10:14:48.69901656 +0000 UTC m=+0.020851947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:14:48 np0005603787 podman[239444]: 2026-01-31 10:14:48.797499255 +0000 UTC m=+0.119334622 container start dd09f70c2c8383feadd97bea3c96c2a5aeb1bac5ceb658a194f6a133ba92b809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:14:48 np0005603787 podman[239444]: 2026-01-31 10:14:48.800507486 +0000 UTC m=+0.122342863 container attach dd09f70c2c8383feadd97bea3c96c2a5aeb1bac5ceb658a194f6a133ba92b809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]: {
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:    "0": [
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:        {
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "devices": [
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "/dev/loop3"
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            ],
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "lv_name": "ceph_lv0",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "lv_size": "21470642176",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "name": "ceph_lv0",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "tags": {
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.cluster_name": "ceph",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.crush_device_class": "",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.encrypted": "0",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.objectstore": "bluestore",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.osd_id": "0",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.type": "block",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.vdo": "0",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.with_tpm": "0"
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            },
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "type": "block",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "vg_name": "ceph_vg0"
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:        }
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:    ],
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:    "1": [
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:        {
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "devices": [
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "/dev/loop4"
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            ],
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "lv_name": "ceph_lv1",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "lv_size": "21470642176",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "name": "ceph_lv1",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "tags": {
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.cluster_name": "ceph",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.crush_device_class": "",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.encrypted": "0",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.objectstore": "bluestore",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.osd_id": "1",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.type": "block",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.vdo": "0",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.with_tpm": "0"
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            },
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "type": "block",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "vg_name": "ceph_vg1"
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:        }
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:    ],
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:    "2": [
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:        {
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "devices": [
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "/dev/loop5"
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            ],
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "lv_name": "ceph_lv2",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "lv_size": "21470642176",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "name": "ceph_lv2",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "tags": {
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.cluster_name": "ceph",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.crush_device_class": "",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.encrypted": "0",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.objectstore": "bluestore",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.osd_id": "2",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.type": "block",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.vdo": "0",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:                "ceph.with_tpm": "0"
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            },
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "type": "block",
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:            "vg_name": "ceph_vg2"
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:        }
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]:    ]
Jan 31 05:14:49 np0005603787 fervent_snyder[239460]: }
Jan 31 05:14:49 np0005603787 systemd[1]: libpod-dd09f70c2c8383feadd97bea3c96c2a5aeb1bac5ceb658a194f6a133ba92b809.scope: Deactivated successfully.
Jan 31 05:14:49 np0005603787 podman[239444]: 2026-01-31 10:14:49.05764918 +0000 UTC m=+0.379484547 container died dd09f70c2c8383feadd97bea3c96c2a5aeb1bac5ceb658a194f6a133ba92b809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_snyder, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:14:49 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f78295a86f3bcb1f87e99f00b82db21dad7a298f17b9820012be43a8031450fb-merged.mount: Deactivated successfully.
Jan 31 05:14:49 np0005603787 podman[239444]: 2026-01-31 10:14:49.100277147 +0000 UTC m=+0.422112514 container remove dd09f70c2c8383feadd97bea3c96c2a5aeb1bac5ceb658a194f6a133ba92b809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 05:14:49 np0005603787 systemd[1]: libpod-conmon-dd09f70c2c8383feadd97bea3c96c2a5aeb1bac5ceb658a194f6a133ba92b809.scope: Deactivated successfully.
Jan 31 05:14:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:49 np0005603787 podman[239543]: 2026-01-31 10:14:49.502310006 +0000 UTC m=+0.039984706 container create 8c959b01ff05c7dc3c0ff3a502263edcf862809a07eab438d7beaef4ff68ffe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_raman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:14:49 np0005603787 systemd[1]: Started libpod-conmon-8c959b01ff05c7dc3c0ff3a502263edcf862809a07eab438d7beaef4ff68ffe8.scope.
Jan 31 05:14:49 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:14:49 np0005603787 podman[239543]: 2026-01-31 10:14:49.571720272 +0000 UTC m=+0.109394962 container init 8c959b01ff05c7dc3c0ff3a502263edcf862809a07eab438d7beaef4ff68ffe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:14:49 np0005603787 podman[239543]: 2026-01-31 10:14:49.577005735 +0000 UTC m=+0.114680415 container start 8c959b01ff05c7dc3c0ff3a502263edcf862809a07eab438d7beaef4ff68ffe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 05:14:49 np0005603787 jovial_raman[239560]: 167 167
Jan 31 05:14:49 np0005603787 systemd[1]: libpod-8c959b01ff05c7dc3c0ff3a502263edcf862809a07eab438d7beaef4ff68ffe8.scope: Deactivated successfully.
Jan 31 05:14:49 np0005603787 podman[239543]: 2026-01-31 10:14:49.485390787 +0000 UTC m=+0.023065487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:14:49 np0005603787 podman[239543]: 2026-01-31 10:14:49.58158943 +0000 UTC m=+0.119264140 container attach 8c959b01ff05c7dc3c0ff3a502263edcf862809a07eab438d7beaef4ff68ffe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Jan 31 05:14:49 np0005603787 podman[239543]: 2026-01-31 10:14:49.58198253 +0000 UTC m=+0.119657210 container died 8c959b01ff05c7dc3c0ff3a502263edcf862809a07eab438d7beaef4ff68ffe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 05:14:49 np0005603787 systemd[1]: var-lib-containers-storage-overlay-187f9a5a1dc886fbd565bf5bd2de99c385350eb8c6d0a04a61c7fa0b4c2c4ea8-merged.mount: Deactivated successfully.
Jan 31 05:14:49 np0005603787 podman[239543]: 2026-01-31 10:14:49.613469386 +0000 UTC m=+0.151144106 container remove 8c959b01ff05c7dc3c0ff3a502263edcf862809a07eab438d7beaef4ff68ffe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_raman, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:14:49 np0005603787 systemd[1]: libpod-conmon-8c959b01ff05c7dc3c0ff3a502263edcf862809a07eab438d7beaef4ff68ffe8.scope: Deactivated successfully.
Jan 31 05:14:49 np0005603787 podman[239583]: 2026-01-31 10:14:49.730246487 +0000 UTC m=+0.040673506 container create 218895b1e230b8405a02610b064a8382c1280b0a870f8141552c11d9d5db40f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wilbur, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:14:49 np0005603787 systemd[1]: Started libpod-conmon-218895b1e230b8405a02610b064a8382c1280b0a870f8141552c11d9d5db40f0.scope.
Jan 31 05:14:49 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:14:49 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6be911808662aac8a190a9b59bf286dd64576fbe503fa3e24f3de77464884ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:14:49 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6be911808662aac8a190a9b59bf286dd64576fbe503fa3e24f3de77464884ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:14:49 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6be911808662aac8a190a9b59bf286dd64576fbe503fa3e24f3de77464884ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:14:49 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6be911808662aac8a190a9b59bf286dd64576fbe503fa3e24f3de77464884ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:14:49 np0005603787 podman[239583]: 2026-01-31 10:14:49.710553712 +0000 UTC m=+0.020980721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:14:49 np0005603787 podman[239583]: 2026-01-31 10:14:49.824306771 +0000 UTC m=+0.134733760 container init 218895b1e230b8405a02610b064a8382c1280b0a870f8141552c11d9d5db40f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:14:49 np0005603787 podman[239583]: 2026-01-31 10:14:49.829185814 +0000 UTC m=+0.139612803 container start 218895b1e230b8405a02610b064a8382c1280b0a870f8141552c11d9d5db40f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:14:49 np0005603787 podman[239583]: 2026-01-31 10:14:49.832634417 +0000 UTC m=+0.143061436 container attach 218895b1e230b8405a02610b064a8382c1280b0a870f8141552c11d9d5db40f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 05:14:50 np0005603787 lvm[239678]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:14:50 np0005603787 lvm[239678]: VG ceph_vg0 finished
Jan 31 05:14:50 np0005603787 lvm[239679]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:14:50 np0005603787 lvm[239679]: VG ceph_vg1 finished
Jan 31 05:14:50 np0005603787 lvm[239681]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:14:50 np0005603787 lvm[239681]: VG ceph_vg2 finished
Jan 31 05:14:50 np0005603787 tender_wilbur[239600]: {}
Jan 31 05:14:50 np0005603787 systemd[1]: libpod-218895b1e230b8405a02610b064a8382c1280b0a870f8141552c11d9d5db40f0.scope: Deactivated successfully.
Jan 31 05:14:50 np0005603787 podman[239583]: 2026-01-31 10:14:50.531652492 +0000 UTC m=+0.842079491 container died 218895b1e230b8405a02610b064a8382c1280b0a870f8141552c11d9d5db40f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wilbur, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 05:14:50 np0005603787 systemd[1]: var-lib-containers-storage-overlay-d6be911808662aac8a190a9b59bf286dd64576fbe503fa3e24f3de77464884ce-merged.mount: Deactivated successfully.
Jan 31 05:14:50 np0005603787 podman[239583]: 2026-01-31 10:14:50.573330494 +0000 UTC m=+0.883757473 container remove 218895b1e230b8405a02610b064a8382c1280b0a870f8141552c11d9d5db40f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wilbur, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 05:14:50 np0005603787 systemd[1]: libpod-conmon-218895b1e230b8405a02610b064a8382c1280b0a870f8141552c11d9d5db40f0.scope: Deactivated successfully.
Jan 31 05:14:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:14:50 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:14:50 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:14:50 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:14:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:14:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:14:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:14:52.665154) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854492665193, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1762, "num_deletes": 250, "total_data_size": 3008368, "memory_usage": 3045128, "flush_reason": "Manual Compaction"}
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854492675875, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1703769, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11826, "largest_seqno": 13587, "table_properties": {"data_size": 1697923, "index_size": 2921, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14657, "raw_average_key_size": 20, "raw_value_size": 1685079, "raw_average_value_size": 2324, "num_data_blocks": 135, "num_entries": 725, "num_filter_entries": 725, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769854292, "oldest_key_time": 1769854292, "file_creation_time": 1769854492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 10792 microseconds, and 4806 cpu microseconds.
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:14:52.675941) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1703769 bytes OK
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:14:52.675970) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:14:52.678769) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:14:52.678796) EVENT_LOG_v1 {"time_micros": 1769854492678786, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:14:52.678821) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3000848, prev total WAL file size 3000848, number of live WAL files 2.
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:14:52.679698) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1663KB)], [29(7987KB)]
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854492679767, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9882844, "oldest_snapshot_seqno": -1}
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4062 keys, 7792708 bytes, temperature: kUnknown
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854492732937, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7792708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7763665, "index_size": 17797, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 96690, "raw_average_key_size": 23, "raw_value_size": 7688569, "raw_average_value_size": 1892, "num_data_blocks": 774, "num_entries": 4062, "num_filter_entries": 4062, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853439, "oldest_key_time": 0, "file_creation_time": 1769854492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:14:52.733195) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7792708 bytes
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:14:52.734998) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.6 rd, 146.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.8 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(10.4) write-amplify(4.6) OK, records in: 4480, records dropped: 418 output_compression: NoCompression
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:14:52.735016) EVENT_LOG_v1 {"time_micros": 1769854492735007, "job": 12, "event": "compaction_finished", "compaction_time_micros": 53235, "compaction_time_cpu_micros": 17157, "output_level": 6, "num_output_files": 1, "total_output_size": 7792708, "num_input_records": 4480, "num_output_records": 4062, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854492735273, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854492736061, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:14:52.679576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:14:52.736193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:14:52.736199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:14:52.736201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:14:52.736203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:14:52 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:14:52.736205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:14:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1786947556520692e-06 of space, bias 4.0, pg target 0.0014144337067824831 quantized to 16 (current 16)
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:14:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:14:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.105 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.106 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.106 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.106 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.121 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.121 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.122 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.123 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.123 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.123 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.124 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.124 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.125 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.190 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.190 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.190 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.190 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.191 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:14:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:14:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:14:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:14:57 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3247132293' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.753 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.896 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.897 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5133MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.897 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.897 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.965 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.965 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:14:57 np0005603787 nova_compute[238603]: 2026-01-31 10:14:57.997 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:14:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:14:58 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1295584497' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:14:58 np0005603787 nova_compute[238603]: 2026-01-31 10:14:58.517 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:14:58 np0005603787 nova_compute[238603]: 2026-01-31 10:14:58.521 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:14:58 np0005603787 nova_compute[238603]: 2026-01-31 10:14:58.547 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:14:58 np0005603787 nova_compute[238603]: 2026-01-31 10:14:58.583 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:14:58 np0005603787 nova_compute[238603]: 2026-01-31 10:14:58.583 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:14:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:15:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 31 05:15:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/547102194' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 31 05:15:05 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 31 05:15:05 np0005603787 ceph-mgr[75453]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 05:15:05 np0005603787 ceph-mgr[75453]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 05:15:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:15:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:15:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:15:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:15:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:15:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:15:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:15:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:15:13 np0005603787 podman[239766]: 2026-01-31 10:15:13.858537302 +0000 UTC m=+0.057161484 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 05:15:13 np0005603787 podman[239765]: 2026-01-31 10:15:13.919320978 +0000 UTC m=+0.121963216 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 05:15:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:15:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:15:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3176457697' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:15:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:15:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3176457697' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:15:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:15:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:15:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:15:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:15:37.054 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:15:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:15:37.055 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:15:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:15:37.055 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:15:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:15:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:42 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:15:43
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms']
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:15:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:15:44 np0005603787 podman[239812]: 2026-01-31 10:15:44.831262074 +0000 UTC m=+0.051303075 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 05:15:44 np0005603787 podman[239811]: 2026-01-31 10:15:44.853484893 +0000 UTC m=+0.076072087 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 05:15:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:15:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:15:51 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:15:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:15:51 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:15:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:15:51 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:15:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:15:51 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:15:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:15:51 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:15:51 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:15:51 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:15:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:51 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:15:51 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:15:51 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:15:51 np0005603787 podman[239998]: 2026-01-31 10:15:51.823805932 +0000 UTC m=+0.063317960 container create 46ec206d2a81f2eee4b9b47c3e87677172c20829028435844dfa405f667e840b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:15:51 np0005603787 systemd[1]: Started libpod-conmon-46ec206d2a81f2eee4b9b47c3e87677172c20829028435844dfa405f667e840b.scope.
Jan 31 05:15:51 np0005603787 podman[239998]: 2026-01-31 10:15:51.795619003 +0000 UTC m=+0.035131081 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:15:51 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:15:51 np0005603787 podman[239998]: 2026-01-31 10:15:51.917914167 +0000 UTC m=+0.157426245 container init 46ec206d2a81f2eee4b9b47c3e87677172c20829028435844dfa405f667e840b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:15:51 np0005603787 podman[239998]: 2026-01-31 10:15:51.926811193 +0000 UTC m=+0.166323221 container start 46ec206d2a81f2eee4b9b47c3e87677172c20829028435844dfa405f667e840b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_ishizaka, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 31 05:15:51 np0005603787 systemd[1]: libpod-46ec206d2a81f2eee4b9b47c3e87677172c20829028435844dfa405f667e840b.scope: Deactivated successfully.
Jan 31 05:15:51 np0005603787 podman[239998]: 2026-01-31 10:15:51.931560039 +0000 UTC m=+0.171072117 container attach 46ec206d2a81f2eee4b9b47c3e87677172c20829028435844dfa405f667e840b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_ishizaka, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 05:15:51 np0005603787 condescending_ishizaka[240015]: 167 167
Jan 31 05:15:51 np0005603787 conmon[240015]: conmon 46ec206d2a81f2eee4b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46ec206d2a81f2eee4b9b47c3e87677172c20829028435844dfa405f667e840b.scope/container/memory.events
Jan 31 05:15:51 np0005603787 podman[239998]: 2026-01-31 10:15:51.932820475 +0000 UTC m=+0.172332513 container died 46ec206d2a81f2eee4b9b47c3e87677172c20829028435844dfa405f667e840b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 05:15:51 np0005603787 systemd[1]: var-lib-containers-storage-overlay-8e8cecff1b5556bf1d2170af27ba307e8270918eb6b5e4d6baeaa3a264aa0a70-merged.mount: Deactivated successfully.
Jan 31 05:15:51 np0005603787 podman[239998]: 2026-01-31 10:15:51.980588288 +0000 UTC m=+0.220100276 container remove 46ec206d2a81f2eee4b9b47c3e87677172c20829028435844dfa405f667e840b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_ishizaka, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 05:15:51 np0005603787 systemd[1]: libpod-conmon-46ec206d2a81f2eee4b9b47c3e87677172c20829028435844dfa405f667e840b.scope: Deactivated successfully.
Jan 31 05:15:52 np0005603787 podman[240038]: 2026-01-31 10:15:52.14075245 +0000 UTC m=+0.061542350 container create 816d42c9ac764f7c1ebafedc8f733b97a2a8d8292d205fb0b9bc6dbea34325fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bouman, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:15:52 np0005603787 systemd[1]: Started libpod-conmon-816d42c9ac764f7c1ebafedc8f733b97a2a8d8292d205fb0b9bc6dbea34325fd.scope.
Jan 31 05:15:52 np0005603787 podman[240038]: 2026-01-31 10:15:52.114782264 +0000 UTC m=+0.035572224 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:15:52 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:15:52 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924461dd910505085ca9f43f42c53cfca4ef9c3afd2b3593584ef4f01a56be18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:15:52 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924461dd910505085ca9f43f42c53cfca4ef9c3afd2b3593584ef4f01a56be18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:15:52 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924461dd910505085ca9f43f42c53cfca4ef9c3afd2b3593584ef4f01a56be18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:15:52 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924461dd910505085ca9f43f42c53cfca4ef9c3afd2b3593584ef4f01a56be18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:15:52 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924461dd910505085ca9f43f42c53cfca4ef9c3afd2b3593584ef4f01a56be18/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:15:52 np0005603787 podman[240038]: 2026-01-31 10:15:52.270430097 +0000 UTC m=+0.191220007 container init 816d42c9ac764f7c1ebafedc8f733b97a2a8d8292d205fb0b9bc6dbea34325fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bouman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:15:52 np0005603787 podman[240038]: 2026-01-31 10:15:52.279115746 +0000 UTC m=+0.199905626 container start 816d42c9ac764f7c1ebafedc8f733b97a2a8d8292d205fb0b9bc6dbea34325fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bouman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:15:52 np0005603787 podman[240038]: 2026-01-31 10:15:52.282807012 +0000 UTC m=+0.203596902 container attach 816d42c9ac764f7c1ebafedc8f733b97a2a8d8292d205fb0b9bc6dbea34325fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 05:15:52 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:15:52.516 154765 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:08:49', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:80:fe:bf:9d:90'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 05:15:52 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:15:52.517 154765 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 05:15:52 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:15:52.520 154765 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ef41023c-ae05-4c9a-b1cb-d6bd86d05fb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 05:15:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:15:52 np0005603787 dreamy_bouman[240055]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:15:52 np0005603787 dreamy_bouman[240055]: --> All data devices are unavailable
Jan 31 05:15:52 np0005603787 systemd[1]: libpod-816d42c9ac764f7c1ebafedc8f733b97a2a8d8292d205fb0b9bc6dbea34325fd.scope: Deactivated successfully.
Jan 31 05:15:52 np0005603787 podman[240038]: 2026-01-31 10:15:52.778762654 +0000 UTC m=+0.699552544 container died 816d42c9ac764f7c1ebafedc8f733b97a2a8d8292d205fb0b9bc6dbea34325fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 05:15:52 np0005603787 systemd[1]: var-lib-containers-storage-overlay-924461dd910505085ca9f43f42c53cfca4ef9c3afd2b3593584ef4f01a56be18-merged.mount: Deactivated successfully.
Jan 31 05:15:52 np0005603787 podman[240038]: 2026-01-31 10:15:52.825385633 +0000 UTC m=+0.746175513 container remove 816d42c9ac764f7c1ebafedc8f733b97a2a8d8292d205fb0b9bc6dbea34325fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bouman, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:15:52 np0005603787 systemd[1]: libpod-conmon-816d42c9ac764f7c1ebafedc8f733b97a2a8d8292d205fb0b9bc6dbea34325fd.scope: Deactivated successfully.
Jan 31 05:15:53 np0005603787 podman[240148]: 2026-01-31 10:15:53.281566272 +0000 UTC m=+0.037757926 container create 18438c31cc1bfb85da9b4b9180e681775428b333a84a313636a7f95a56200c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_allen, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 05:15:53 np0005603787 systemd[1]: Started libpod-conmon-18438c31cc1bfb85da9b4b9180e681775428b333a84a313636a7f95a56200c9c.scope.
Jan 31 05:15:53 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:15:53 np0005603787 podman[240148]: 2026-01-31 10:15:53.263345469 +0000 UTC m=+0.019537103 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:15:53 np0005603787 podman[240148]: 2026-01-31 10:15:53.361613822 +0000 UTC m=+0.117805486 container init 18438c31cc1bfb85da9b4b9180e681775428b333a84a313636a7f95a56200c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_allen, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 05:15:53 np0005603787 podman[240148]: 2026-01-31 10:15:53.370004454 +0000 UTC m=+0.126196078 container start 18438c31cc1bfb85da9b4b9180e681775428b333a84a313636a7f95a56200c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_allen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 05:15:53 np0005603787 cool_allen[240165]: 167 167
Jan 31 05:15:53 np0005603787 systemd[1]: libpod-18438c31cc1bfb85da9b4b9180e681775428b333a84a313636a7f95a56200c9c.scope: Deactivated successfully.
Jan 31 05:15:53 np0005603787 conmon[240165]: conmon 18438c31cc1bfb85da9b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-18438c31cc1bfb85da9b4b9180e681775428b333a84a313636a7f95a56200c9c.scope/container/memory.events
Jan 31 05:15:53 np0005603787 podman[240148]: 2026-01-31 10:15:53.377783248 +0000 UTC m=+0.133974872 container attach 18438c31cc1bfb85da9b4b9180e681775428b333a84a313636a7f95a56200c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_allen, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:15:53 np0005603787 podman[240148]: 2026-01-31 10:15:53.378400395 +0000 UTC m=+0.134592019 container died 18438c31cc1bfb85da9b4b9180e681775428b333a84a313636a7f95a56200c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 05:15:53 np0005603787 systemd[1]: var-lib-containers-storage-overlay-fb542670d183bb316a85ab408072e640955eca6c8771733fcdd4e5ca050cce6e-merged.mount: Deactivated successfully.
Jan 31 05:15:53 np0005603787 podman[240148]: 2026-01-31 10:15:53.427648921 +0000 UTC m=+0.183840575 container remove 18438c31cc1bfb85da9b4b9180e681775428b333a84a313636a7f95a56200c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_allen, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 05:15:53 np0005603787 systemd[1]: libpod-conmon-18438c31cc1bfb85da9b4b9180e681775428b333a84a313636a7f95a56200c9c.scope: Deactivated successfully.
Jan 31 05:15:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:53 np0005603787 podman[240191]: 2026-01-31 10:15:53.610060512 +0000 UTC m=+0.064301599 container create 24f27030d72973f666e56ef055367d67ddeb38d40876397ba82f294bdcd7f78b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 05:15:53 np0005603787 systemd[1]: Started libpod-conmon-24f27030d72973f666e56ef055367d67ddeb38d40876397ba82f294bdcd7f78b.scope.
Jan 31 05:15:53 np0005603787 podman[240191]: 2026-01-31 10:15:53.58389687 +0000 UTC m=+0.038138037 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:15:53 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:15:53 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b70f50869d1ab6dbd02fa1d13afec06fba55d5bdee2de9e47c6b90835f14d59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:15:53 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b70f50869d1ab6dbd02fa1d13afec06fba55d5bdee2de9e47c6b90835f14d59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:15:53 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b70f50869d1ab6dbd02fa1d13afec06fba55d5bdee2de9e47c6b90835f14d59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:15:53 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b70f50869d1ab6dbd02fa1d13afec06fba55d5bdee2de9e47c6b90835f14d59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:15:53 np0005603787 podman[240191]: 2026-01-31 10:15:53.698188145 +0000 UTC m=+0.152429262 container init 24f27030d72973f666e56ef055367d67ddeb38d40876397ba82f294bdcd7f78b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_gates, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 05:15:53 np0005603787 podman[240191]: 2026-01-31 10:15:53.710461798 +0000 UTC m=+0.164702885 container start 24f27030d72973f666e56ef055367d67ddeb38d40876397ba82f294bdcd7f78b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_gates, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:15:53 np0005603787 podman[240191]: 2026-01-31 10:15:53.713892456 +0000 UTC m=+0.168133573 container attach 24f27030d72973f666e56ef055367d67ddeb38d40876397ba82f294bdcd7f78b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]: {
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:    "0": [
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:        {
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "devices": [
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "/dev/loop3"
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            ],
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "lv_name": "ceph_lv0",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "lv_size": "21470642176",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "name": "ceph_lv0",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "tags": {
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.cluster_name": "ceph",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.crush_device_class": "",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.encrypted": "0",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.objectstore": "bluestore",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.osd_id": "0",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.type": "block",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.vdo": "0",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.with_tpm": "0"
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            },
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "type": "block",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "vg_name": "ceph_vg0"
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:        }
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:    ],
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:    "1": [
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:        {
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "devices": [
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "/dev/loop4"
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            ],
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "lv_name": "ceph_lv1",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "lv_size": "21470642176",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "name": "ceph_lv1",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "tags": {
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.cluster_name": "ceph",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.crush_device_class": "",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.encrypted": "0",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.objectstore": "bluestore",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.osd_id": "1",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.type": "block",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.vdo": "0",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.with_tpm": "0"
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            },
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "type": "block",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "vg_name": "ceph_vg1"
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:        }
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:    ],
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:    "2": [
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:        {
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "devices": [
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "/dev/loop5"
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            ],
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "lv_name": "ceph_lv2",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "lv_size": "21470642176",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "name": "ceph_lv2",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "tags": {
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.cluster_name": "ceph",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.crush_device_class": "",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.encrypted": "0",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.objectstore": "bluestore",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.osd_id": "2",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.type": "block",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.vdo": "0",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:                "ceph.with_tpm": "0"
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            },
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "type": "block",
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:            "vg_name": "ceph_vg2"
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:        }
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]:    ]
Jan 31 05:15:53 np0005603787 inspiring_gates[240207]: }
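The JSON block printed by the inspiring_gates container above has the shape of a `ceph-volume lvm list --format json` report: a map keyed by OSD id, one entry per backing LV with its lv_path, devices and ceph.* tags (osd_fsid, objectstore, and so on). A minimal sketch of summarizing such a report, using only the fields visible above:

    import json

    def summarize_osd_report(raw: str) -> list:
        """Illustrative only: reduce a ceph-volume style JSON report, as
        printed above, to one row per LV with the fields of interest."""
        rows = []
        for osd_id, entries in sorted(json.loads(raw).items(),
                                      key=lambda kv: int(kv[0])):
            for lv in entries:
                tags = lv.get("tags", {})
                rows.append({
                    "osd_id": osd_id,
                    "lv_path": lv.get("lv_path"),
                    "devices": lv.get("devices", []),
                    "osd_fsid": tags.get("ceph.osd_fsid"),
                    "objectstore": tags.get("ceph.objectstore"),
                })
        return rows

For the report above this yields OSDs 0, 1 and 2 on /dev/ceph_vg0/ceph_lv0 through /dev/ceph_vg2/ceph_lv2 (backed by /dev/loop3, /dev/loop4 and /dev/loop5), all bluestore.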
Jan 31 05:15:54 np0005603787 systemd[1]: libpod-24f27030d72973f666e56ef055367d67ddeb38d40876397ba82f294bdcd7f78b.scope: Deactivated successfully.
Jan 31 05:15:54 np0005603787 podman[240191]: 2026-01-31 10:15:54.024478351 +0000 UTC m=+0.478719448 container died 24f27030d72973f666e56ef055367d67ddeb38d40876397ba82f294bdcd7f78b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_gates, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 05:15:54 np0005603787 systemd[1]: var-lib-containers-storage-overlay-3b70f50869d1ab6dbd02fa1d13afec06fba55d5bdee2de9e47c6b90835f14d59-merged.mount: Deactivated successfully.
Jan 31 05:15:54 np0005603787 podman[240191]: 2026-01-31 10:15:54.071297346 +0000 UTC m=+0.525538463 container remove 24f27030d72973f666e56ef055367d67ddeb38d40876397ba82f294bdcd7f78b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_gates, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:15:54 np0005603787 systemd[1]: libpod-conmon-24f27030d72973f666e56ef055367d67ddeb38d40876397ba82f294bdcd7f78b.scope: Deactivated successfully.
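The podman and systemd lines in this stretch trace short-lived helper containers through a full lifecycle: container init, start and attach, the libpod scope deactivating, container died, the overlay mount being torn down, and container remove. A hedged sketch for following those events in the journal; the regex is written against the exact line shape above, not against any general podman format:

    import re

    PODMAN_EVENT = re.compile(
        r"podman\[\d+\]: (?:\S+ ){5}container (?P<event>\w+) "
        r"(?P<cid>[0-9a-f]{64}) \(image=(?P<image>[^,]+), name=(?P<name>[^,)]+)"
    )

    def podman_events(lines):
        # Yields (event, container name, short id) for lines like the ones above.
        for line in lines:
            m = PODMAN_EVENT.search(line)
            if m:
                yield m.group("event"), m.group("name"), m.group("cid")[:12]

Applied to this excerpt it yields init/start/attach/died/remove for inspiring_gates (24f27030d729) and, further down, the same sequence (plus the preceding create) for stupefied_lamarr and magical_maxwell.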
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1786947556520692e-06 of space, bias 4.0, pg target 0.0014144337067824831 quantized to 16 (current 16)
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:15:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
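The pg_autoscaler entries above fit a simple relationship: a pool's raw PG target is its capacity ratio times its bias times the cluster-wide PG budget, which is then quantized and compared against the current pg_num. The logged targets are reproduced with a budget of 300, consistent with the default of 100 target PGs per OSD times the three OSDs enumerated earlier; that budget is an inference from the numbers, not something the log states. A quick check:

    # Assumed PG budget: mon_target_pg_per_osd default (100) x 3 OSDs = 300.
    budget = 100 * 3

    for pool, ratio, bias, logged_target in [
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 1.1786947556520692e-06, 4.0, 0.0014144337067824831),
        (".rgw.root",          3.8154424692322717e-07, 1.0, 0.00011446327407696816),
        ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    ]:
        computed = ratio * bias * budget
        assert abs(computed - logged_target) / logged_target < 1e-6, pool

Every target here is far below one PG, so the autoscaler keeps the current pg_num values shown (1, 16 or 32).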
Jan 31 05:15:54 np0005603787 podman[240291]: 2026-01-31 10:15:54.541902709 +0000 UTC m=+0.056070112 container create fbc458d1098f025ca15d410ba83e526a624ed64530da7d175119553a3cdc3530 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lamarr, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:15:54 np0005603787 systemd[1]: Started libpod-conmon-fbc458d1098f025ca15d410ba83e526a624ed64530da7d175119553a3cdc3530.scope.
Jan 31 05:15:54 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:15:54 np0005603787 podman[240291]: 2026-01-31 10:15:54.519101934 +0000 UTC m=+0.033269417 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:15:54 np0005603787 podman[240291]: 2026-01-31 10:15:54.623787403 +0000 UTC m=+0.137954806 container init fbc458d1098f025ca15d410ba83e526a624ed64530da7d175119553a3cdc3530 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:15:54 np0005603787 podman[240291]: 2026-01-31 10:15:54.630853916 +0000 UTC m=+0.145021319 container start fbc458d1098f025ca15d410ba83e526a624ed64530da7d175119553a3cdc3530 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lamarr, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 05:15:54 np0005603787 podman[240291]: 2026-01-31 10:15:54.634376926 +0000 UTC m=+0.148544329 container attach fbc458d1098f025ca15d410ba83e526a624ed64530da7d175119553a3cdc3530 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lamarr, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:15:54 np0005603787 systemd[1]: libpod-fbc458d1098f025ca15d410ba83e526a624ed64530da7d175119553a3cdc3530.scope: Deactivated successfully.
Jan 31 05:15:54 np0005603787 stupefied_lamarr[240308]: 167 167
Jan 31 05:15:54 np0005603787 conmon[240308]: conmon fbc458d1098f025ca15d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fbc458d1098f025ca15d410ba83e526a624ed64530da7d175119553a3cdc3530.scope/container/memory.events
Jan 31 05:15:54 np0005603787 podman[240291]: 2026-01-31 10:15:54.637038003 +0000 UTC m=+0.151205406 container died fbc458d1098f025ca15d410ba83e526a624ed64530da7d175119553a3cdc3530 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 05:15:54 np0005603787 systemd[1]: var-lib-containers-storage-overlay-3cdc4b3fca54b131d0ad08ec81557dd681c9ddac5d260c01083fddc9d46a7915-merged.mount: Deactivated successfully.
Jan 31 05:15:54 np0005603787 podman[240291]: 2026-01-31 10:15:54.672024778 +0000 UTC m=+0.186192201 container remove fbc458d1098f025ca15d410ba83e526a624ed64530da7d175119553a3cdc3530 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:15:54 np0005603787 systemd[1]: libpod-conmon-fbc458d1098f025ca15d410ba83e526a624ed64530da7d175119553a3cdc3530.scope: Deactivated successfully.
Jan 31 05:15:54 np0005603787 podman[240332]: 2026-01-31 10:15:54.825844609 +0000 UTC m=+0.038897239 container create f2c9625363a5c9b54f51b073b89e966a956d74be8799a9c9e716a649cf89f9dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_maxwell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:15:54 np0005603787 systemd[1]: Started libpod-conmon-f2c9625363a5c9b54f51b073b89e966a956d74be8799a9c9e716a649cf89f9dd.scope.
Jan 31 05:15:54 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:15:54 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b60b7611caa9fe1c10c06b142759d0bd4b4ed1c489ba55b543afeab2bbd665f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:15:54 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b60b7611caa9fe1c10c06b142759d0bd4b4ed1c489ba55b543afeab2bbd665f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:15:54 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b60b7611caa9fe1c10c06b142759d0bd4b4ed1c489ba55b543afeab2bbd665f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:15:54 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b60b7611caa9fe1c10c06b142759d0bd4b4ed1c489ba55b543afeab2bbd665f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:15:54 np0005603787 podman[240332]: 2026-01-31 10:15:54.810162588 +0000 UTC m=+0.023215228 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:15:54 np0005603787 podman[240332]: 2026-01-31 10:15:54.923529146 +0000 UTC m=+0.136581796 container init f2c9625363a5c9b54f51b073b89e966a956d74be8799a9c9e716a649cf89f9dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_maxwell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 05:15:54 np0005603787 podman[240332]: 2026-01-31 10:15:54.929505597 +0000 UTC m=+0.142558217 container start f2c9625363a5c9b54f51b073b89e966a956d74be8799a9c9e716a649cf89f9dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:15:54 np0005603787 podman[240332]: 2026-01-31 10:15:54.933182703 +0000 UTC m=+0.146235323 container attach f2c9625363a5c9b54f51b073b89e966a956d74be8799a9c9e716a649cf89f9dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_maxwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:15:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:55 np0005603787 lvm[240427]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:15:55 np0005603787 lvm[240426]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:15:55 np0005603787 lvm[240427]: VG ceph_vg1 finished
Jan 31 05:15:55 np0005603787 lvm[240426]: VG ceph_vg0 finished
Jan 31 05:15:55 np0005603787 lvm[240429]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:15:55 np0005603787 lvm[240429]: VG ceph_vg2 finished
Jan 31 05:15:55 np0005603787 magical_maxwell[240348]: {}
Jan 31 05:15:55 np0005603787 systemd[1]: libpod-f2c9625363a5c9b54f51b073b89e966a956d74be8799a9c9e716a649cf89f9dd.scope: Deactivated successfully.
Jan 31 05:15:55 np0005603787 systemd[1]: libpod-f2c9625363a5c9b54f51b073b89e966a956d74be8799a9c9e716a649cf89f9dd.scope: Consumed 1.155s CPU time.
Jan 31 05:15:55 np0005603787 podman[240332]: 2026-01-31 10:15:55.71882526 +0000 UTC m=+0.931877940 container died f2c9625363a5c9b54f51b073b89e966a956d74be8799a9c9e716a649cf89f9dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_maxwell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:15:55 np0005603787 systemd[1]: var-lib-containers-storage-overlay-4b60b7611caa9fe1c10c06b142759d0bd4b4ed1c489ba55b543afeab2bbd665f-merged.mount: Deactivated successfully.
Jan 31 05:15:55 np0005603787 podman[240332]: 2026-01-31 10:15:55.762403512 +0000 UTC m=+0.975456142 container remove f2c9625363a5c9b54f51b073b89e966a956d74be8799a9c9e716a649cf89f9dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_maxwell, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 05:15:55 np0005603787 systemd[1]: libpod-conmon-f2c9625363a5c9b54f51b073b89e966a956d74be8799a9c9e716a649cf89f9dd.scope: Deactivated successfully.
Jan 31 05:15:55 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:15:55 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:15:55 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:15:55 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:15:56 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:15:56 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:15:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:15:58 np0005603787 nova_compute[238603]: 2026-01-31 10:15:58.575 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:15:58 np0005603787 nova_compute[238603]: 2026-01-31 10:15:58.576 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:15:58 np0005603787 nova_compute[238603]: 2026-01-31 10:15:58.597 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:15:58 np0005603787 nova_compute[238603]: 2026-01-31 10:15:58.597 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:15:58 np0005603787 nova_compute[238603]: 2026-01-31 10:15:58.597 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:15:58 np0005603787 nova_compute[238603]: 2026-01-31 10:15:58.611 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:15:58 np0005603787 nova_compute[238603]: 2026-01-31 10:15:58.611 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:15:58 np0005603787 nova_compute[238603]: 2026-01-31 10:15:58.612 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:15:58 np0005603787 nova_compute[238603]: 2026-01-31 10:15:58.612 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:15:58 np0005603787 nova_compute[238603]: 2026-01-31 10:15:58.612 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:15:59 np0005603787 nova_compute[238603]: 2026-01-31 10:15:59.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:15:59 np0005603787 nova_compute[238603]: 2026-01-31 10:15:59.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:15:59 np0005603787 nova_compute[238603]: 2026-01-31 10:15:59.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:15:59 np0005603787 nova_compute[238603]: 2026-01-31 10:15:59.104 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:15:59 np0005603787 nova_compute[238603]: 2026-01-31 10:15:59.127 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:15:59 np0005603787 nova_compute[238603]: 2026-01-31 10:15:59.128 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:15:59 np0005603787 nova_compute[238603]: 2026-01-31 10:15:59.128 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:15:59 np0005603787 nova_compute[238603]: 2026-01-31 10:15:59.128 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:15:59 np0005603787 nova_compute[238603]: 2026-01-31 10:15:59.129 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:15:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:15:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:15:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3187097186' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:15:59 np0005603787 nova_compute[238603]: 2026-01-31 10:15:59.677 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:15:59 np0005603787 nova_compute[238603]: 2026-01-31 10:15:59.828 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:15:59 np0005603787 nova_compute[238603]: 2026-01-31 10:15:59.829 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5108MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:15:59 np0005603787 nova_compute[238603]: 2026-01-31 10:15:59.830 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:15:59 np0005603787 nova_compute[238603]: 2026-01-31 10:15:59.830 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:15:59 np0005603787 nova_compute[238603]: 2026-01-31 10:15:59.916 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:15:59 np0005603787 nova_compute[238603]: 2026-01-31 10:15:59.916 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:15:59 np0005603787 nova_compute[238603]: 2026-01-31 10:15:59.935 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:16:00 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:16:00 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4004699617' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:16:00 np0005603787 nova_compute[238603]: 2026-01-31 10:16:00.452 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:16:00 np0005603787 nova_compute[238603]: 2026-01-31 10:16:00.458 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:16:00 np0005603787 nova_compute[238603]: 2026-01-31 10:16:00.471 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:16:00 np0005603787 nova_compute[238603]: 2026-01-31 10:16:00.472 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:16:00 np0005603787 nova_compute[238603]: 2026-01-31 10:16:00.472 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
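The update_available_resource pass above sizes the RBD-backed disk inventory by shelling out to the exact command logged (`ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`); the resulting free_disk of 59.98828125 GB is truncated to the phys_disk / DISK_GB total of 59 reported to placement. A hedged sketch of reading the same figures; the JSON key names ("stats", "total_bytes", "total_avail_bytes") are assumed from current Ceph releases rather than taken from this log:

    import json
    import subprocess

    def ceph_capacity_gib(conf="/etc/ceph/ceph.conf", user="openstack"):
        # Same command as the resource tracker runs above.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf]
        )
        stats = json.loads(out)["stats"]
        gib = 1024 ** 3
        return stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib

On this node the pgmap lines report 60 GiB / 60 GiB avail, which matches the three ~20 GiB OSD LVs listed earlier and the 64411926528-byte figure in the autoscaler's effective_target_ratio lines.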
Jan 31 05:16:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:16:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:16:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:16:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:16:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:16:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:16:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:16:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:16:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:16:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:15 np0005603787 podman[240516]: 2026-01-31 10:16:15.873993935 +0000 UTC m=+0.086514174 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 05:16:15 np0005603787 podman[240515]: 2026-01-31 10:16:15.875211479 +0000 UTC m=+0.089492867 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 05:16:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:16:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:16:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2664530146' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:16:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:16:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2664530146' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:16:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:16:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:16:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:16:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:16:37.056 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 05:16:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:16:37.057 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 05:16:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:16:37.057 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
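The three DEBUG lines above are the standard oslo.concurrency trace around a lock-guarded call: acquiring, acquired (with wait time), released (with hold time). A minimal sketch that produces the same pattern when DEBUG logging is enabled; the function below is illustrative, not the agent's code:

    # Sketch of the lockutils pattern behind the log lines above (an assumed
    # simplification of neutron's ProcessMonitor._check_child_processes).
    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        pass  # while this runs the lock is held; "held N.NNNs" is logged on exit

    check_child_processes()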
Jan 31 05:16:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:16:37.688422) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854597688479, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1327, "num_deletes": 505, "total_data_size": 1556415, "memory_usage": 1579872, "flush_reason": "Manual Compaction"}
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854597700107, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1541029, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13588, "largest_seqno": 14914, "table_properties": {"data_size": 1535224, "index_size": 2625, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 14772, "raw_average_key_size": 18, "raw_value_size": 1521634, "raw_average_value_size": 1855, "num_data_blocks": 120, "num_entries": 820, "num_filter_entries": 820, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769854492, "oldest_key_time": 1769854492, "file_creation_time": 1769854597, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 11757 microseconds, and 3942 cpu microseconds.
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:16:37.700174) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1541029 bytes OK
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:16:37.700204) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:16:37.702022) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:16:37.702039) EVENT_LOG_v1 {"time_micros": 1769854597702034, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:16:37.702067) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1549405, prev total WAL file size 1549405, number of live WAL files 2.
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:16:37.702579) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1504KB)], [32(7610KB)]
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854597702635, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9333737, "oldest_snapshot_seqno": -1}
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3859 keys, 7353820 bytes, temperature: kUnknown
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854597737691, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7353820, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7326263, "index_size": 16823, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 94549, "raw_average_key_size": 24, "raw_value_size": 7254656, "raw_average_value_size": 1879, "num_data_blocks": 711, "num_entries": 3859, "num_filter_entries": 3859, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853439, "oldest_key_time": 0, "file_creation_time": 1769854597, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:16:37.738047) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7353820 bytes
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:16:37.739598) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 265.4 rd, 209.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.4 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(10.8) write-amplify(4.8) OK, records in: 4882, records dropped: 1023 output_compression: NoCompression
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:16:37.739629) EVENT_LOG_v1 {"time_micros": 1769854597739614, "job": 14, "event": "compaction_finished", "compaction_time_micros": 35164, "compaction_time_cpu_micros": 12798, "output_level": 6, "num_output_files": 1, "total_output_size": 7353820, "num_input_records": 4882, "num_output_records": 3859, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854597740038, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854597741450, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:16:37.702485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:16:37.741523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:16:37.741532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:16:37.741534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:16:37.741536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:16:37 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:16:37.741538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
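The rocksdb entries above (memtable flush, job 13, then the manual compaction, job 14, on the ceph-mon store) carry their structured details as JSON after the "EVENT_LOG_v1 " marker. A small, hypothetical filter for pulling those events out of captured journal text piped in on stdin; it is not part of Ceph or RocksDB:

    # Extract flush/compaction events from rocksdb EVENT_LOG_v1 lines (sketch).
    import json
    import sys

    MARKER = "EVENT_LOG_v1 "

    for line in sys.stdin:
        idx = line.find(MARKER)
        if idx == -1:
            continue
        event = json.loads(line[idx + len(MARKER):])
        if event.get("event") in ("flush_finished", "compaction_finished"):
            print(event["event"], event.get("lsm_state"))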
Jan 31 05:16:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:42 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:16:43
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.meta', 'backups']
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:16:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:16:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:46 np0005603787 podman[240562]: 2026-01-31 10:16:46.856911556 +0000 UTC m=+0.066718607 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Jan 31 05:16:46 np0005603787 podman[240561]: 2026-01-31 10:16:46.869783918 +0000 UTC m=+0.084121547 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 05:16:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:16:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:16:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1786947556520692e-06 of space, bias 4.0, pg target 0.0014144337067824831 quantized to 16 (current 16)
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:16:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
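The autoscaler figures above are internally consistent: each printed pg target equals capacity_ratio * bias * 300, where 300 is plausibly this cluster's 3 OSDs times the default mon_target_pg_per_osd of 100 (an inference, not stated in the log), before the "quantized to" rounding shown on each line. A quick check against the sampled values:

    # Verify pg_target == capacity_ratio * bias * 300 for pools logged above.
    # The factor 300 (3 OSDs x mon_target_pg_per_osd=100) is an assumption.
    rows = [
        (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 1.1786947556520692e-06, 4.0, 0.0014144337067824831),
        (".rgw.root",          3.8154424692322717e-07, 1.0, 0.00011446327407696816),
        ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    ]
    for pool, ratio, bias, target in rows:
        assert abs(ratio * bias * 300 - target) < 1e-12, pool
    print("pg target == capacity_ratio * bias * 300 for all sampled pools")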
Jan 31 05:16:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:16:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:16:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:16:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:16:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:16:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:16:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:16:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:16:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:16:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:16:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:16:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:16:57 np0005603787 podman[240752]: 2026-01-31 10:16:57.029286546 +0000 UTC m=+0.054743992 container create 4c46a82b8b8a6a4d84db82686b75150bda40bbcd1178e56cdc9bf7194dbeaf2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 05:16:57 np0005603787 systemd[1]: Started libpod-conmon-4c46a82b8b8a6a4d84db82686b75150bda40bbcd1178e56cdc9bf7194dbeaf2a.scope.
Jan 31 05:16:57 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:16:57 np0005603787 podman[240752]: 2026-01-31 10:16:57.010350773 +0000 UTC m=+0.035808259 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:16:57 np0005603787 podman[240752]: 2026-01-31 10:16:57.111647731 +0000 UTC m=+0.137105207 container init 4c46a82b8b8a6a4d84db82686b75150bda40bbcd1178e56cdc9bf7194dbeaf2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_dijkstra, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:16:57 np0005603787 podman[240752]: 2026-01-31 10:16:57.119932624 +0000 UTC m=+0.145390070 container start 4c46a82b8b8a6a4d84db82686b75150bda40bbcd1178e56cdc9bf7194dbeaf2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_dijkstra, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 05:16:57 np0005603787 podman[240752]: 2026-01-31 10:16:57.123513255 +0000 UTC m=+0.148970731 container attach 4c46a82b8b8a6a4d84db82686b75150bda40bbcd1178e56cdc9bf7194dbeaf2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:16:57 np0005603787 charming_dijkstra[240768]: 167 167
Jan 31 05:16:57 np0005603787 systemd[1]: libpod-4c46a82b8b8a6a4d84db82686b75150bda40bbcd1178e56cdc9bf7194dbeaf2a.scope: Deactivated successfully.
Jan 31 05:16:57 np0005603787 podman[240752]: 2026-01-31 10:16:57.126852799 +0000 UTC m=+0.152310245 container died 4c46a82b8b8a6a4d84db82686b75150bda40bbcd1178e56cdc9bf7194dbeaf2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 31 05:16:57 np0005603787 systemd[1]: var-lib-containers-storage-overlay-4993ce5be66d348f9fbd52a9dd427f7c2040292f46c0af5aafab7ab5b4c64430-merged.mount: Deactivated successfully.
Jan 31 05:16:57 np0005603787 podman[240752]: 2026-01-31 10:16:57.168311115 +0000 UTC m=+0.193768551 container remove 4c46a82b8b8a6a4d84db82686b75150bda40bbcd1178e56cdc9bf7194dbeaf2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_dijkstra, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 05:16:57 np0005603787 systemd[1]: libpod-conmon-4c46a82b8b8a6a4d84db82686b75150bda40bbcd1178e56cdc9bf7194dbeaf2a.scope: Deactivated successfully.
Jan 31 05:16:57 np0005603787 podman[240791]: 2026-01-31 10:16:57.300435391 +0000 UTC m=+0.038111363 container create 0471c00158c58e8ec92a5e01db5e27116cd48ec076dbfc56baf0a1d6d001bfb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:16:57 np0005603787 systemd[1]: Started libpod-conmon-0471c00158c58e8ec92a5e01db5e27116cd48ec076dbfc56baf0a1d6d001bfb0.scope.
Jan 31 05:16:57 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:16:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbcda38f107ce7b34fa881a0f054455e10ae109f927cd55297cb9675f42819d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:16:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbcda38f107ce7b34fa881a0f054455e10ae109f927cd55297cb9675f42819d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:16:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbcda38f107ce7b34fa881a0f054455e10ae109f927cd55297cb9675f42819d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:16:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbcda38f107ce7b34fa881a0f054455e10ae109f927cd55297cb9675f42819d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:16:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbcda38f107ce7b34fa881a0f054455e10ae109f927cd55297cb9675f42819d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:16:57 np0005603787 podman[240791]: 2026-01-31 10:16:57.38393728 +0000 UTC m=+0.121613342 container init 0471c00158c58e8ec92a5e01db5e27116cd48ec076dbfc56baf0a1d6d001bfb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lichterman, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:16:57 np0005603787 podman[240791]: 2026-01-31 10:16:57.287499297 +0000 UTC m=+0.025175309 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:16:57 np0005603787 podman[240791]: 2026-01-31 10:16:57.391740898 +0000 UTC m=+0.129416920 container start 0471c00158c58e8ec92a5e01db5e27116cd48ec076dbfc56baf0a1d6d001bfb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:16:57 np0005603787 podman[240791]: 2026-01-31 10:16:57.395548636 +0000 UTC m=+0.133224668 container attach 0471c00158c58e8ec92a5e01db5e27116cd48ec076dbfc56baf0a1d6d001bfb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 05:16:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:57 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:16:57 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:16:57 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:16:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:16:57 np0005603787 focused_lichterman[240807]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:16:57 np0005603787 focused_lichterman[240807]: --> All data devices are unavailable
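The two lines above report that all three LVM-backed data devices are already consumed, so this ceph-volume pass has nothing new to prepare; the JSON printed by the later frosty_haibt container below matches the shape of ceph-volume lvm list --format json output (one entry per OSD id with its logical volume and ceph.* tags). A minimal sketch for collecting the same inventory directly, assuming ceph-volume is available on the host or inside a cephadm shell:

    # List existing Ceph LVs per OSD, as the container output below does (sketch).
    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"].get("ceph.osd_fsid"))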
Jan 31 05:16:57 np0005603787 systemd[1]: libpod-0471c00158c58e8ec92a5e01db5e27116cd48ec076dbfc56baf0a1d6d001bfb0.scope: Deactivated successfully.
Jan 31 05:16:57 np0005603787 podman[240828]: 2026-01-31 10:16:57.841581 +0000 UTC m=+0.027914146 container died 0471c00158c58e8ec92a5e01db5e27116cd48ec076dbfc56baf0a1d6d001bfb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lichterman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 05:16:57 np0005603787 systemd[1]: var-lib-containers-storage-overlay-fbcda38f107ce7b34fa881a0f054455e10ae109f927cd55297cb9675f42819d9-merged.mount: Deactivated successfully.
Jan 31 05:16:57 np0005603787 podman[240828]: 2026-01-31 10:16:57.877637254 +0000 UTC m=+0.063970400 container remove 0471c00158c58e8ec92a5e01db5e27116cd48ec076dbfc56baf0a1d6d001bfb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:16:57 np0005603787 systemd[1]: libpod-conmon-0471c00158c58e8ec92a5e01db5e27116cd48ec076dbfc56baf0a1d6d001bfb0.scope: Deactivated successfully.
Jan 31 05:16:58 np0005603787 podman[240903]: 2026-01-31 10:16:58.297816772 +0000 UTC m=+0.033577025 container create 75cd705aa3956230abcb3c76be5a4023cfd9dc768da44f1701c4429427ba7b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 05:16:58 np0005603787 systemd[1]: Started libpod-conmon-75cd705aa3956230abcb3c76be5a4023cfd9dc768da44f1701c4429427ba7b32.scope.
Jan 31 05:16:58 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:16:58 np0005603787 podman[240903]: 2026-01-31 10:16:58.362218453 +0000 UTC m=+0.097978746 container init 75cd705aa3956230abcb3c76be5a4023cfd9dc768da44f1701c4429427ba7b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:16:58 np0005603787 podman[240903]: 2026-01-31 10:16:58.371191945 +0000 UTC m=+0.106952238 container start 75cd705aa3956230abcb3c76be5a4023cfd9dc768da44f1701c4429427ba7b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:16:58 np0005603787 systemd[1]: libpod-75cd705aa3956230abcb3c76be5a4023cfd9dc768da44f1701c4429427ba7b32.scope: Deactivated successfully.
Jan 31 05:16:58 np0005603787 gracious_blackwell[240920]: 167 167
Jan 31 05:16:58 np0005603787 conmon[240920]: conmon 75cd705aa3956230abcb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-75cd705aa3956230abcb3c76be5a4023cfd9dc768da44f1701c4429427ba7b32.scope/container/memory.events
Jan 31 05:16:58 np0005603787 podman[240903]: 2026-01-31 10:16:58.37600003 +0000 UTC m=+0.111760313 container attach 75cd705aa3956230abcb3c76be5a4023cfd9dc768da44f1701c4429427ba7b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_blackwell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:16:58 np0005603787 podman[240903]: 2026-01-31 10:16:58.376376051 +0000 UTC m=+0.112136334 container died 75cd705aa3956230abcb3c76be5a4023cfd9dc768da44f1701c4429427ba7b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_blackwell, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:16:58 np0005603787 podman[240903]: 2026-01-31 10:16:58.283034946 +0000 UTC m=+0.018795229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:16:58 np0005603787 systemd[1]: var-lib-containers-storage-overlay-89b3f461d2977f914d764c441925309b78eb0dc398cf2336f5ca1be53df031b5-merged.mount: Deactivated successfully.
Jan 31 05:16:58 np0005603787 podman[240903]: 2026-01-31 10:16:58.418881916 +0000 UTC m=+0.154642179 container remove 75cd705aa3956230abcb3c76be5a4023cfd9dc768da44f1701c4429427ba7b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_blackwell, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 05:16:58 np0005603787 systemd[1]: libpod-conmon-75cd705aa3956230abcb3c76be5a4023cfd9dc768da44f1701c4429427ba7b32.scope: Deactivated successfully.
Jan 31 05:16:58 np0005603787 podman[240945]: 2026-01-31 10:16:58.593812596 +0000 UTC m=+0.051802128 container create 310aa6af2e13f5fea69a6232d0c82ab8d8ba96b3fcdbe5536e5bbdb83c52d6e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 05:16:58 np0005603787 systemd[1]: Started libpod-conmon-310aa6af2e13f5fea69a6232d0c82ab8d8ba96b3fcdbe5536e5bbdb83c52d6e6.scope.
Jan 31 05:16:58 np0005603787 podman[240945]: 2026-01-31 10:16:58.57332967 +0000 UTC m=+0.031319182 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:16:58 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:16:58 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d7e24048064d7acdda61e2faf6e6acdcb3b1bcadae44cfc6d2b3f8d9e7b2a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:16:58 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d7e24048064d7acdda61e2faf6e6acdcb3b1bcadae44cfc6d2b3f8d9e7b2a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:16:58 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d7e24048064d7acdda61e2faf6e6acdcb3b1bcadae44cfc6d2b3f8d9e7b2a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:16:58 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d7e24048064d7acdda61e2faf6e6acdcb3b1bcadae44cfc6d2b3f8d9e7b2a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:16:58 np0005603787 podman[240945]: 2026-01-31 10:16:58.6992123 +0000 UTC m=+0.157201822 container init 310aa6af2e13f5fea69a6232d0c82ab8d8ba96b3fcdbe5536e5bbdb83c52d6e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_haibt, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:16:58 np0005603787 podman[240945]: 2026-01-31 10:16:58.711236939 +0000 UTC m=+0.169226431 container start 310aa6af2e13f5fea69a6232d0c82ab8d8ba96b3fcdbe5536e5bbdb83c52d6e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:16:58 np0005603787 podman[240945]: 2026-01-31 10:16:58.715162909 +0000 UTC m=+0.173152501 container attach 310aa6af2e13f5fea69a6232d0c82ab8d8ba96b3fcdbe5536e5bbdb83c52d6e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]: {
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:    "0": [
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:        {
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "devices": [
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "/dev/loop3"
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            ],
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "lv_name": "ceph_lv0",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "lv_size": "21470642176",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "name": "ceph_lv0",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "tags": {
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.cluster_name": "ceph",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.crush_device_class": "",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.encrypted": "0",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.objectstore": "bluestore",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.osd_id": "0",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.type": "block",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.vdo": "0",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.with_tpm": "0"
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            },
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "type": "block",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "vg_name": "ceph_vg0"
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:        }
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:    ],
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:    "1": [
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:        {
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "devices": [
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "/dev/loop4"
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            ],
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "lv_name": "ceph_lv1",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "lv_size": "21470642176",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "name": "ceph_lv1",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "tags": {
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.cluster_name": "ceph",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.crush_device_class": "",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.encrypted": "0",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.objectstore": "bluestore",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.osd_id": "1",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.type": "block",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.vdo": "0",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.with_tpm": "0"
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            },
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "type": "block",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "vg_name": "ceph_vg1"
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:        }
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:    ],
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:    "2": [
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:        {
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "devices": [
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "/dev/loop5"
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            ],
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "lv_name": "ceph_lv2",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "lv_size": "21470642176",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "name": "ceph_lv2",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "tags": {
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.cluster_name": "ceph",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.crush_device_class": "",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.encrypted": "0",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.objectstore": "bluestore",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.osd_id": "2",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.type": "block",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.vdo": "0",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:                "ceph.with_tpm": "0"
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            },
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "type": "block",
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:            "vg_name": "ceph_vg2"
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:        }
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]:    ]
Jan 31 05:16:58 np0005603787 frosty_haibt[240962]: }
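The JSON block above, emitted by the frosty_haibt container, maps each OSD id ("0", "1", "2") to its logical volume, backing device, and ceph.* LV tags. A minimal sketch of summarising that mapping in Python, assuming the blob has been saved to a file named osd_lvm.json (the filename and the chosen fields are illustrative, not part of the log):

    import json

    # Load the listing captured from the container output above.
    with open("osd_lvm.json") as f:
        listing = json.load(f)   # {"0": [{...}], "1": [{...}], "2": [{...}]}

    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags.get('ceph.osd_fsid', '?')} "
                  f"objectstore={tags.get('ceph.objectstore', '?')}")

For the data above this prints one line per OSD, e.g. osd.0 backed by /dev/ceph_vg0/ceph_lv0 on /dev/loop3.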
Jan 31 05:16:58 np0005603787 systemd[1]: libpod-310aa6af2e13f5fea69a6232d0c82ab8d8ba96b3fcdbe5536e5bbdb83c52d6e6.scope: Deactivated successfully.
Jan 31 05:16:58 np0005603787 podman[240945]: 2026-01-31 10:16:58.957848504 +0000 UTC m=+0.415838096 container died 310aa6af2e13f5fea69a6232d0c82ab8d8ba96b3fcdbe5536e5bbdb83c52d6e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 05:16:58 np0005603787 systemd[1]: var-lib-containers-storage-overlay-62d7e24048064d7acdda61e2faf6e6acdcb3b1bcadae44cfc6d2b3f8d9e7b2a2-merged.mount: Deactivated successfully.
Jan 31 05:16:59 np0005603787 podman[240945]: 2026-01-31 10:16:59.000593056 +0000 UTC m=+0.458582568 container remove 310aa6af2e13f5fea69a6232d0c82ab8d8ba96b3fcdbe5536e5bbdb83c52d6e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_haibt, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:16:59 np0005603787 systemd[1]: libpod-conmon-310aa6af2e13f5fea69a6232d0c82ab8d8ba96b3fcdbe5536e5bbdb83c52d6e6.scope: Deactivated successfully.
Jan 31 05:16:59 np0005603787 podman[241048]: 2026-01-31 10:16:59.432812672 +0000 UTC m=+0.050551433 container create 58ffb5804ea2745904ade4dddc8d55c2f573ab4e853b7095a142772f39098323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 05:16:59 np0005603787 systemd[1]: Started libpod-conmon-58ffb5804ea2745904ade4dddc8d55c2f573ab4e853b7095a142772f39098323.scope.
Jan 31 05:16:59 np0005603787 nova_compute[238603]: 2026-01-31 10:16:59.470 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:16:59 np0005603787 nova_compute[238603]: 2026-01-31 10:16:59.471 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:16:59 np0005603787 nova_compute[238603]: 2026-01-31 10:16:59.471 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:16:59 np0005603787 nova_compute[238603]: 2026-01-31 10:16:59.472 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:16:59 np0005603787 nova_compute[238603]: 2026-01-31 10:16:59.472 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:16:59 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:16:59 np0005603787 podman[241048]: 2026-01-31 10:16:59.487058638 +0000 UTC m=+0.104797379 container init 58ffb5804ea2745904ade4dddc8d55c2f573ab4e853b7095a142772f39098323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_feistel, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:16:59 np0005603787 podman[241048]: 2026-01-31 10:16:59.492928833 +0000 UTC m=+0.110667604 container start 58ffb5804ea2745904ade4dddc8d55c2f573ab4e853b7095a142772f39098323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_feistel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 05:16:59 np0005603787 systemd[1]: libpod-58ffb5804ea2745904ade4dddc8d55c2f573ab4e853b7095a142772f39098323.scope: Deactivated successfully.
Jan 31 05:16:59 np0005603787 charming_feistel[241064]: 167 167
Jan 31 05:16:59 np0005603787 conmon[241064]: conmon 58ffb5804ea2745904ad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58ffb5804ea2745904ade4dddc8d55c2f573ab4e853b7095a142772f39098323.scope/container/memory.events
Jan 31 05:16:59 np0005603787 podman[241048]: 2026-01-31 10:16:59.497021318 +0000 UTC m=+0.114760079 container attach 58ffb5804ea2745904ade4dddc8d55c2f573ab4e853b7095a142772f39098323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_feistel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 05:16:59 np0005603787 nova_compute[238603]: 2026-01-31 10:16:59.496 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:16:59 np0005603787 nova_compute[238603]: 2026-01-31 10:16:59.497 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:16:59 np0005603787 nova_compute[238603]: 2026-01-31 10:16:59.497 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:16:59 np0005603787 podman[241048]: 2026-01-31 10:16:59.497856502 +0000 UTC m=+0.115595243 container died 58ffb5804ea2745904ade4dddc8d55c2f573ab4e853b7095a142772f39098323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_feistel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 05:16:59 np0005603787 podman[241048]: 2026-01-31 10:16:59.406180363 +0000 UTC m=+0.023919194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:16:59 np0005603787 nova_compute[238603]: 2026-01-31 10:16:59.497 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:16:59 np0005603787 nova_compute[238603]: 2026-01-31 10:16:59.498 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:16:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:16:59 np0005603787 systemd[1]: var-lib-containers-storage-overlay-0057cb8c5b4b455e955f2377ddb1217ffcdc03b3b3740d757f3afb0ef3771ee0-merged.mount: Deactivated successfully.
Jan 31 05:16:59 np0005603787 podman[241048]: 2026-01-31 10:16:59.530537781 +0000 UTC m=+0.148276522 container remove 58ffb5804ea2745904ade4dddc8d55c2f573ab4e853b7095a142772f39098323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 05:16:59 np0005603787 systemd[1]: libpod-conmon-58ffb5804ea2745904ade4dddc8d55c2f573ab4e853b7095a142772f39098323.scope: Deactivated successfully.
Jan 31 05:16:59 np0005603787 podman[241107]: 2026-01-31 10:16:59.686686052 +0000 UTC m=+0.062957842 container create 66a5e3e8b32eb94518df5336e0aa264cf886b65946b274b6b8c1f9f6d936de94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_elbakyan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:16:59 np0005603787 systemd[1]: Started libpod-conmon-66a5e3e8b32eb94518df5336e0aa264cf886b65946b274b6b8c1f9f6d936de94.scope.
Jan 31 05:16:59 np0005603787 podman[241107]: 2026-01-31 10:16:59.659691583 +0000 UTC m=+0.035963403 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:16:59 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:16:59 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851cdf36e1ee631ef37c040e5aabb5cd7b004389cc1f1ccac83a4abe291b869f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:16:59 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851cdf36e1ee631ef37c040e5aabb5cd7b004389cc1f1ccac83a4abe291b869f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:16:59 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851cdf36e1ee631ef37c040e5aabb5cd7b004389cc1f1ccac83a4abe291b869f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:16:59 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851cdf36e1ee631ef37c040e5aabb5cd7b004389cc1f1ccac83a4abe291b869f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:16:59 np0005603787 podman[241107]: 2026-01-31 10:16:59.7918473 +0000 UTC m=+0.168119110 container init 66a5e3e8b32eb94518df5336e0aa264cf886b65946b274b6b8c1f9f6d936de94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 05:16:59 np0005603787 podman[241107]: 2026-01-31 10:16:59.800920915 +0000 UTC m=+0.177192705 container start 66a5e3e8b32eb94518df5336e0aa264cf886b65946b274b6b8c1f9f6d936de94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_elbakyan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:16:59 np0005603787 podman[241107]: 2026-01-31 10:16:59.805062361 +0000 UTC m=+0.181334151 container attach 66a5e3e8b32eb94518df5336e0aa264cf886b65946b274b6b8c1f9f6d936de94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 05:16:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:16:59 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3911738054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:17:00 np0005603787 nova_compute[238603]: 2026-01-31 10:17:00.001 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
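The periodic resource audit shells out to ceph df exactly as logged above (and again a few lines later for the second check). A minimal sketch of running the same command and reading the cluster-wide totals from its JSON, in Python; the exact keys under "stats" and "pools" vary by Ceph release, so the ones used here are an assumption:

    import json
    import subprocess

    # Same command nova_compute runs in the log above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"]
    )
    df = json.loads(out)

    stats = df.get("stats", {})          # cluster-wide section
    print("total bytes:", stats.get("total_bytes"))
    print("avail bytes:", stats.get("total_avail_bytes"))
    for pool in df.get("pools", []):     # per-pool section
        print(pool.get("name"), pool.get("stats", {}).get("max_avail"))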
Jan 31 05:17:00 np0005603787 nova_compute[238603]: 2026-01-31 10:17:00.153 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:17:00 np0005603787 nova_compute[238603]: 2026-01-31 10:17:00.155 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5084MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:17:00 np0005603787 nova_compute[238603]: 2026-01-31 10:17:00.155 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:17:00 np0005603787 nova_compute[238603]: 2026-01-31 10:17:00.155 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:17:00 np0005603787 nova_compute[238603]: 2026-01-31 10:17:00.230 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:17:00 np0005603787 nova_compute[238603]: 2026-01-31 10:17:00.230 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:17:00 np0005603787 nova_compute[238603]: 2026-01-31 10:17:00.248 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:17:00 np0005603787 lvm[241225]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:17:00 np0005603787 lvm[241225]: VG ceph_vg0 finished
Jan 31 05:17:00 np0005603787 lvm[241226]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:17:00 np0005603787 lvm[241226]: VG ceph_vg1 finished
Jan 31 05:17:00 np0005603787 lvm[241228]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:17:00 np0005603787 lvm[241228]: VG ceph_vg2 finished
Jan 31 05:17:00 np0005603787 suspicious_elbakyan[241125]: {}
Jan 31 05:17:00 np0005603787 systemd[1]: libpod-66a5e3e8b32eb94518df5336e0aa264cf886b65946b274b6b8c1f9f6d936de94.scope: Deactivated successfully.
Jan 31 05:17:00 np0005603787 podman[241107]: 2026-01-31 10:17:00.5307238 +0000 UTC m=+0.906995600 container died 66a5e3e8b32eb94518df5336e0aa264cf886b65946b274b6b8c1f9f6d936de94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_elbakyan, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:17:00 np0005603787 systemd[1]: libpod-66a5e3e8b32eb94518df5336e0aa264cf886b65946b274b6b8c1f9f6d936de94.scope: Consumed 1.016s CPU time.
Jan 31 05:17:00 np0005603787 systemd[1]: var-lib-containers-storage-overlay-851cdf36e1ee631ef37c040e5aabb5cd7b004389cc1f1ccac83a4abe291b869f-merged.mount: Deactivated successfully.
Jan 31 05:17:00 np0005603787 podman[241107]: 2026-01-31 10:17:00.56629245 +0000 UTC m=+0.942564250 container remove 66a5e3e8b32eb94518df5336e0aa264cf886b65946b274b6b8c1f9f6d936de94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 05:17:00 np0005603787 systemd[1]: libpod-conmon-66a5e3e8b32eb94518df5336e0aa264cf886b65946b274b6b8c1f9f6d936de94.scope: Deactivated successfully.
Jan 31 05:17:00 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:17:00 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:17:00 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:17:00 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:17:00 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:17:00 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:17:00 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/52331009' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:17:00 np0005603787 nova_compute[238603]: 2026-01-31 10:17:00.785 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:17:00 np0005603787 nova_compute[238603]: 2026-01-31 10:17:00.791 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:17:00 np0005603787 nova_compute[238603]: 2026-01-31 10:17:00.810 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
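The inventory dict in the line above fixes, per resource class, a total, a reserved amount, and an allocation ratio. A rough sketch of the capacity arithmetic those numbers imply; the formula (total - reserved) * allocation_ratio is an assumption about how Placement derives usable capacity, not something stated in the log:

    # Values copied from the inventory logged above.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: usable ~ {usable}")
    # MEMORY_MB ~ 7167.0, VCPU ~ 32.0, DISK_GB ~ 53.1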
Jan 31 05:17:00 np0005603787 nova_compute[238603]: 2026-01-31 10:17:00.813 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:17:00 np0005603787 nova_compute[238603]: 2026-01-31 10:17:00.814 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:17:01 np0005603787 nova_compute[238603]: 2026-01-31 10:17:01.442 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:17:01 np0005603787 nova_compute[238603]: 2026-01-31 10:17:01.442 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:17:01 np0005603787 nova_compute[238603]: 2026-01-31 10:17:01.442 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:17:01 np0005603787 nova_compute[238603]: 2026-01-31 10:17:01.443 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:17:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:01 np0005603787 nova_compute[238603]: 2026-01-31 10:17:01.591 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:17:01 np0005603787 nova_compute[238603]: 2026-01-31 10:17:01.592 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:17:01 np0005603787 nova_compute[238603]: 2026-01-31 10:17:01.593 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:17:01 np0005603787 nova_compute[238603]: 2026-01-31 10:17:01.593 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:17:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:17:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:17:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:17:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:17:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:17:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:17:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:17:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:17:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:17:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:17:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:17:17 np0005603787 podman[241273]: 2026-01-31 10:17:17.860161695 +0000 UTC m=+0.079781265 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 05:17:17 np0005603787 podman[241272]: 2026-01-31 10:17:17.876415943 +0000 UTC m=+0.094926851 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 05:17:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:21 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:17:21 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3373 writes, 15K keys, 3373 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3373 writes, 3373 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1296 writes, 5882 keys, 1296 commit groups, 1.0 writes per commit group, ingest: 8.66 MB, 0.01 MB/s#012Interval WAL: 1296 writes, 1296 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    155.6      0.10              0.04         7    0.015       0      0       0.0       0.0#012  L6      1/0    7.01 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.6    184.2    151.5      0.28              0.09         6    0.047     24K   3200       0.0       0.0#012 Sum      1/0    7.01 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6    134.4    152.6      0.39              0.12        13    0.030     24K   3200       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8    151.7    152.4      0.23              0.08         8    0.029     17K   2469       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    184.2    151.5      0.28              0.09         6    0.047     24K   3200       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    160.6      0.10              0.04         6    0.017       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.016, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.4 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b1fd4298d0#2 capacity: 308.00 MB usage: 1.93 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(105,1.71 MB,0.554038%) FilterBlock(14,75.86 KB,0.0240524%) IndexBlock(14,153.30 KB,0.0486052%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 05:17:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:17:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3813217108' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:17:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:17:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3813217108' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:17:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:17:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:17:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:17:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:17:37.058 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:17:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:17:37.058 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:17:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:17:37.058 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:17:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:17:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:42 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:17:43
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['backups', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.log', 'volumes', 'images']
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:17:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:17:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:17:48 np0005603787 podman[241315]: 2026-01-31 10:17:48.853778431 +0000 UTC m=+0.071642835 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller)
Jan 31 05:17:48 np0005603787 podman[241316]: 2026-01-31 10:17:48.854122741 +0000 UTC m=+0.071830451 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 05:17:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:17:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1786947556520692e-06 of space, bias 4.0, pg target 0.0014144337067824831 quantized to 16 (current 16)
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:17:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:17:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:17:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:17:59 np0005603787 nova_compute[238603]: 2026-01-31 10:17:59.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:17:59 np0005603787 nova_compute[238603]: 2026-01-31 10:17:59.122 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:17:59 np0005603787 nova_compute[238603]: 2026-01-31 10:17:59.122 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:17:59 np0005603787 nova_compute[238603]: 2026-01-31 10:17:59.122 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:17:59 np0005603787 nova_compute[238603]: 2026-01-31 10:17:59.123 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:17:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.124 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.124 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.124 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.155 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.155 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.155 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.156 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.156 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:18:00 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:18:00 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1280378083' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.719 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.871 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.872 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5167MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.873 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.873 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.969 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.969 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:18:00 np0005603787 nova_compute[238603]: 2026-01-31 10:18:00.994 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:18:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:18:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:18:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:18:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:18:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:18:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:18:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:18:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:18:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:18:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:18:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:18:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:18:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:18:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1088244057' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:18:01 np0005603787 nova_compute[238603]: 2026-01-31 10:18:01.516 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:18:01 np0005603787 nova_compute[238603]: 2026-01-31 10:18:01.521 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:18:01 np0005603787 nova_compute[238603]: 2026-01-31 10:18:01.534 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:18:01 np0005603787 nova_compute[238603]: 2026-01-31 10:18:01.536 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:18:01 np0005603787 nova_compute[238603]: 2026-01-31 10:18:01.536 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:18:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:01 np0005603787 podman[241541]: 2026-01-31 10:18:01.686528601 +0000 UTC m=+0.047943379 container create eac7867e430712b260b7f1e32952e3ef8e79cef2424d58e6139c2830ca91c353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 05:18:01 np0005603787 systemd[1]: Started libpod-conmon-eac7867e430712b260b7f1e32952e3ef8e79cef2424d58e6139c2830ca91c353.scope.
Jan 31 05:18:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:18:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:18:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:18:01 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:18:01 np0005603787 podman[241541]: 2026-01-31 10:18:01.664822201 +0000 UTC m=+0.026237079 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:18:01 np0005603787 podman[241541]: 2026-01-31 10:18:01.76931401 +0000 UTC m=+0.130728808 container init eac7867e430712b260b7f1e32952e3ef8e79cef2424d58e6139c2830ca91c353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:18:01 np0005603787 podman[241541]: 2026-01-31 10:18:01.77820414 +0000 UTC m=+0.139618928 container start eac7867e430712b260b7f1e32952e3ef8e79cef2424d58e6139c2830ca91c353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 05:18:01 np0005603787 podman[241541]: 2026-01-31 10:18:01.782044978 +0000 UTC m=+0.143459756 container attach eac7867e430712b260b7f1e32952e3ef8e79cef2424d58e6139c2830ca91c353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_heisenberg, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 05:18:01 np0005603787 youthful_heisenberg[241557]: 167 167
Jan 31 05:18:01 np0005603787 systemd[1]: libpod-eac7867e430712b260b7f1e32952e3ef8e79cef2424d58e6139c2830ca91c353.scope: Deactivated successfully.
Jan 31 05:18:01 np0005603787 podman[241541]: 2026-01-31 10:18:01.786798901 +0000 UTC m=+0.148213719 container died eac7867e430712b260b7f1e32952e3ef8e79cef2424d58e6139c2830ca91c353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_heisenberg, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 05:18:01 np0005603787 systemd[1]: var-lib-containers-storage-overlay-8f4a29fb658d66dfdb521770b1899e1c857ca8b6c51c69a284d0e238c4032755-merged.mount: Deactivated successfully.
Jan 31 05:18:01 np0005603787 podman[241541]: 2026-01-31 10:18:01.828390401 +0000 UTC m=+0.189805189 container remove eac7867e430712b260b7f1e32952e3ef8e79cef2424d58e6139c2830ca91c353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_heisenberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 05:18:01 np0005603787 systemd[1]: libpod-conmon-eac7867e430712b260b7f1e32952e3ef8e79cef2424d58e6139c2830ca91c353.scope: Deactivated successfully.
Jan 31 05:18:02 np0005603787 podman[241581]: 2026-01-31 10:18:02.000241914 +0000 UTC m=+0.057301142 container create 164af5019f0fd34b927e77ec63e7da430fd2e460b5e4842ea0f527f19a2b0944 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_montalcini, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 05:18:02 np0005603787 systemd[1]: Started libpod-conmon-164af5019f0fd34b927e77ec63e7da430fd2e460b5e4842ea0f527f19a2b0944.scope.
Jan 31 05:18:02 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:18:02 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1c70babf5910b018afb5f0b53fefb63041fe893f5eceb9f05931d102fcf32a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:18:02 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1c70babf5910b018afb5f0b53fefb63041fe893f5eceb9f05931d102fcf32a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:18:02 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1c70babf5910b018afb5f0b53fefb63041fe893f5eceb9f05931d102fcf32a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:18:02 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1c70babf5910b018afb5f0b53fefb63041fe893f5eceb9f05931d102fcf32a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:18:02 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1c70babf5910b018afb5f0b53fefb63041fe893f5eceb9f05931d102fcf32a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:18:02 np0005603787 podman[241581]: 2026-01-31 10:18:01.976147386 +0000 UTC m=+0.033206654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:18:02 np0005603787 podman[241581]: 2026-01-31 10:18:02.082781555 +0000 UTC m=+0.139840753 container init 164af5019f0fd34b927e77ec63e7da430fd2e460b5e4842ea0f527f19a2b0944 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_montalcini, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:18:02 np0005603787 podman[241581]: 2026-01-31 10:18:02.088458765 +0000 UTC m=+0.145517943 container start 164af5019f0fd34b927e77ec63e7da430fd2e460b5e4842ea0f527f19a2b0944 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 05:18:02 np0005603787 podman[241581]: 2026-01-31 10:18:02.092222211 +0000 UTC m=+0.149281389 container attach 164af5019f0fd34b927e77ec63e7da430fd2e460b5e4842ea0f527f19a2b0944 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_montalcini, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:18:02 np0005603787 zen_montalcini[241597]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:18:02 np0005603787 zen_montalcini[241597]: --> All data devices are unavailable
Jan 31 05:18:02 np0005603787 nova_compute[238603]: 2026-01-31 10:18:02.514 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:18:02 np0005603787 systemd[1]: libpod-164af5019f0fd34b927e77ec63e7da430fd2e460b5e4842ea0f527f19a2b0944.scope: Deactivated successfully.
Jan 31 05:18:02 np0005603787 podman[241581]: 2026-01-31 10:18:02.520463295 +0000 UTC m=+0.577522483 container died 164af5019f0fd34b927e77ec63e7da430fd2e460b5e4842ea0f527f19a2b0944 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_montalcini, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 05:18:02 np0005603787 systemd[1]: var-lib-containers-storage-overlay-5a1c70babf5910b018afb5f0b53fefb63041fe893f5eceb9f05931d102fcf32a-merged.mount: Deactivated successfully.
Jan 31 05:18:02 np0005603787 podman[241581]: 2026-01-31 10:18:02.568361322 +0000 UTC m=+0.625420500 container remove 164af5019f0fd34b927e77ec63e7da430fd2e460b5e4842ea0f527f19a2b0944 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:18:02 np0005603787 systemd[1]: libpod-conmon-164af5019f0fd34b927e77ec63e7da430fd2e460b5e4842ea0f527f19a2b0944.scope: Deactivated successfully.
Jan 31 05:18:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:18:02 np0005603787 podman[241694]: 2026-01-31 10:18:02.958298629 +0000 UTC m=+0.040682435 container create 5415522639d89d58472cf7bae21e0bfe165a16397e063e1aecfd6234c10fea0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_noyce, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:18:02 np0005603787 systemd[1]: Started libpod-conmon-5415522639d89d58472cf7bae21e0bfe165a16397e063e1aecfd6234c10fea0e.scope.
Jan 31 05:18:03 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:18:03 np0005603787 podman[241694]: 2026-01-31 10:18:02.940142518 +0000 UTC m=+0.022526314 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:18:03 np0005603787 podman[241694]: 2026-01-31 10:18:03.038649159 +0000 UTC m=+0.121032945 container init 5415522639d89d58472cf7bae21e0bfe165a16397e063e1aecfd6234c10fea0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_noyce, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 05:18:03 np0005603787 podman[241694]: 2026-01-31 10:18:03.047406295 +0000 UTC m=+0.129790091 container start 5415522639d89d58472cf7bae21e0bfe165a16397e063e1aecfd6234c10fea0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_noyce, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 31 05:18:03 np0005603787 podman[241694]: 2026-01-31 10:18:03.050605425 +0000 UTC m=+0.132989221 container attach 5415522639d89d58472cf7bae21e0bfe165a16397e063e1aecfd6234c10fea0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:18:03 np0005603787 serene_noyce[241711]: 167 167
Jan 31 05:18:03 np0005603787 systemd[1]: libpod-5415522639d89d58472cf7bae21e0bfe165a16397e063e1aecfd6234c10fea0e.scope: Deactivated successfully.
Jan 31 05:18:03 np0005603787 podman[241694]: 2026-01-31 10:18:03.052986832 +0000 UTC m=+0.135370598 container died 5415522639d89d58472cf7bae21e0bfe165a16397e063e1aecfd6234c10fea0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_noyce, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:18:03 np0005603787 systemd[1]: var-lib-containers-storage-overlay-abd3dd60f4ef894763aa47024da5077fa394c7ef019c84697b0fd280541b73de-merged.mount: Deactivated successfully.
Jan 31 05:18:03 np0005603787 podman[241694]: 2026-01-31 10:18:03.084179609 +0000 UTC m=+0.166563395 container remove 5415522639d89d58472cf7bae21e0bfe165a16397e063e1aecfd6234c10fea0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 05:18:03 np0005603787 systemd[1]: libpod-conmon-5415522639d89d58472cf7bae21e0bfe165a16397e063e1aecfd6234c10fea0e.scope: Deactivated successfully.
Jan 31 05:18:03 np0005603787 nova_compute[238603]: 2026-01-31 10:18:03.104 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:18:03 np0005603787 podman[241736]: 2026-01-31 10:18:03.240269789 +0000 UTC m=+0.049405540 container create 5cb04940879cb31d7f3020c9bf93a7f821db384048db5ea1ed4b22e924acafd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:18:03 np0005603787 systemd[1]: Started libpod-conmon-5cb04940879cb31d7f3020c9bf93a7f821db384048db5ea1ed4b22e924acafd3.scope.
Jan 31 05:18:03 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:18:03 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f99c3097407bee746129d5b801fb150da714813a6751e9d86456bb31b47f43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:18:03 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f99c3097407bee746129d5b801fb150da714813a6751e9d86456bb31b47f43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:18:03 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f99c3097407bee746129d5b801fb150da714813a6751e9d86456bb31b47f43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:18:03 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f99c3097407bee746129d5b801fb150da714813a6751e9d86456bb31b47f43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:18:03 np0005603787 podman[241736]: 2026-01-31 10:18:03.22256521 +0000 UTC m=+0.031700951 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:18:03 np0005603787 podman[241736]: 2026-01-31 10:18:03.326399262 +0000 UTC m=+0.135535033 container init 5cb04940879cb31d7f3020c9bf93a7f821db384048db5ea1ed4b22e924acafd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:18:03 np0005603787 podman[241736]: 2026-01-31 10:18:03.331264108 +0000 UTC m=+0.140399859 container start 5cb04940879cb31d7f3020c9bf93a7f821db384048db5ea1ed4b22e924acafd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_kowalevski, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:18:03 np0005603787 podman[241736]: 2026-01-31 10:18:03.337285277 +0000 UTC m=+0.146421048 container attach 5cb04940879cb31d7f3020c9bf93a7f821db384048db5ea1ed4b22e924acafd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_kowalevski, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:18:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]: {
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:    "0": [
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:        {
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "devices": [
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "/dev/loop3"
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            ],
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "lv_name": "ceph_lv0",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "lv_size": "21470642176",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "name": "ceph_lv0",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "tags": {
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.cluster_name": "ceph",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.crush_device_class": "",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.encrypted": "0",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.objectstore": "bluestore",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.osd_id": "0",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.type": "block",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.vdo": "0",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.with_tpm": "0"
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            },
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "type": "block",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "vg_name": "ceph_vg0"
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:        }
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:    ],
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:    "1": [
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:        {
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "devices": [
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "/dev/loop4"
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            ],
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "lv_name": "ceph_lv1",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "lv_size": "21470642176",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "name": "ceph_lv1",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "tags": {
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.cluster_name": "ceph",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.crush_device_class": "",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.encrypted": "0",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.objectstore": "bluestore",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.osd_id": "1",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.type": "block",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.vdo": "0",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.with_tpm": "0"
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            },
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "type": "block",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "vg_name": "ceph_vg1"
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:        }
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:    ],
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:    "2": [
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:        {
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "devices": [
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "/dev/loop5"
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            ],
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "lv_name": "ceph_lv2",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "lv_size": "21470642176",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "name": "ceph_lv2",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "tags": {
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.cluster_name": "ceph",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.crush_device_class": "",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.encrypted": "0",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.objectstore": "bluestore",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.osd_id": "2",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.type": "block",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.vdo": "0",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:                "ceph.with_tpm": "0"
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            },
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "type": "block",
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:            "vg_name": "ceph_vg2"
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:        }
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]:    ]
Jan 31 05:18:03 np0005603787 competent_kowalevski[241752]: }
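The JSON block above appears to be the output of a `ceph-volume lvm list --format json` run that cephadm issued through the short-lived competent_kowalevski container: top-level keys are OSD ids, each mapping to the logical volume that backs that OSD's BlueStore block device. A minimal Python sketch of how such a report could be summarized offline (the filename lvm_list.json and the chosen fields are assumptions for illustration, not anything cephadm itself does):

# Sketch: summarize a `ceph-volume lvm list --format json` report like the one above.
# Assumes the JSON was saved to lvm_list.json (hypothetical filename).
import json

with open("lvm_list.json") as f:
    report = json.load(f)          # {"0": [{...}], "1": [{...}], "2": [{...}]}

for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv.get("tags", {})
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"on {','.join(lv.get('devices', []))} "
              f"fsid={tags.get('ceph.osd_fsid', '?')} "
              f"objectstore={tags.get('ceph.objectstore', '?')}")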
Jan 31 05:18:03 np0005603787 systemd[1]: libpod-5cb04940879cb31d7f3020c9bf93a7f821db384048db5ea1ed4b22e924acafd3.scope: Deactivated successfully.
Jan 31 05:18:03 np0005603787 podman[241736]: 2026-01-31 10:18:03.641656988 +0000 UTC m=+0.450792729 container died 5cb04940879cb31d7f3020c9bf93a7f821db384048db5ea1ed4b22e924acafd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_kowalevski, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 31 05:18:03 np0005603787 systemd[1]: var-lib-containers-storage-overlay-53f99c3097407bee746129d5b801fb150da714813a6751e9d86456bb31b47f43-merged.mount: Deactivated successfully.
Jan 31 05:18:03 np0005603787 podman[241736]: 2026-01-31 10:18:03.686124699 +0000 UTC m=+0.495260470 container remove 5cb04940879cb31d7f3020c9bf93a7f821db384048db5ea1ed4b22e924acafd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_kowalevski, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 05:18:03 np0005603787 systemd[1]: libpod-conmon-5cb04940879cb31d7f3020c9bf93a7f821db384048db5ea1ed4b22e924acafd3.scope: Deactivated successfully.
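The create → init → start → attach → died → remove sequence above (and the identical cycles that follow for nostalgic_engelbart and cool_wu) is cephadm's pattern of running a one-shot query inside the ceph image and discarding the container immediately. A hedged sketch of the same idea driven from Python; the image digest is taken from the log, but the bind mounts and the exact ceph-volume arguments are assumptions, since cephadm builds a much longer command line:

# Sketch: run a one-shot ceph-volume query inside the ceph image, the way the
# short-lived containers above are used, and let podman discard it (--rm).
import subprocess

IMAGE = "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"

cmd = [
    "podman", "run", "--rm", "--privileged",
    "-v", "/dev:/dev",                       # assumed mounts, for illustration only
    "-v", "/var/lib/ceph:/var/lib/ceph:z",
    IMAGE,
    "ceph-volume", "lvm", "list", "--format", "json",
]
result = subprocess.run(cmd, check=True, capture_output=True, text=True)
print(result.stdout)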
Jan 31 05:18:04 np0005603787 podman[241837]: 2026-01-31 10:18:04.090846931 +0000 UTC m=+0.048694031 container create 6b5923ef92595ae2ea17ea0bf8db5c9ae22d758af63841dc785c86ff94746e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_engelbart, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 05:18:04 np0005603787 systemd[1]: Started libpod-conmon-6b5923ef92595ae2ea17ea0bf8db5c9ae22d758af63841dc785c86ff94746e56.scope.
Jan 31 05:18:04 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:18:04 np0005603787 podman[241837]: 2026-01-31 10:18:04.158709199 +0000 UTC m=+0.116556329 container init 6b5923ef92595ae2ea17ea0bf8db5c9ae22d758af63841dc785c86ff94746e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 05:18:04 np0005603787 podman[241837]: 2026-01-31 10:18:04.06555135 +0000 UTC m=+0.023398550 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:18:04 np0005603787 podman[241837]: 2026-01-31 10:18:04.164121262 +0000 UTC m=+0.121968382 container start 6b5923ef92595ae2ea17ea0bf8db5c9ae22d758af63841dc785c86ff94746e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 05:18:04 np0005603787 podman[241837]: 2026-01-31 10:18:04.167085255 +0000 UTC m=+0.124932385 container attach 6b5923ef92595ae2ea17ea0bf8db5c9ae22d758af63841dc785c86ff94746e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_engelbart, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 05:18:04 np0005603787 nostalgic_engelbart[241853]: 167 167
Jan 31 05:18:04 np0005603787 systemd[1]: libpod-6b5923ef92595ae2ea17ea0bf8db5c9ae22d758af63841dc785c86ff94746e56.scope: Deactivated successfully.
Jan 31 05:18:04 np0005603787 podman[241837]: 2026-01-31 10:18:04.170057478 +0000 UTC m=+0.127904598 container died 6b5923ef92595ae2ea17ea0bf8db5c9ae22d758af63841dc785c86ff94746e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:18:04 np0005603787 systemd[1]: var-lib-containers-storage-overlay-6dc5262eee712b7965a76e9721ac8894804552e0bb6eb807c9881048732ff6a8-merged.mount: Deactivated successfully.
Jan 31 05:18:04 np0005603787 podman[241837]: 2026-01-31 10:18:04.210306971 +0000 UTC m=+0.168154081 container remove 6b5923ef92595ae2ea17ea0bf8db5c9ae22d758af63841dc785c86ff94746e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_engelbart, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:18:04 np0005603787 systemd[1]: libpod-conmon-6b5923ef92595ae2ea17ea0bf8db5c9ae22d758af63841dc785c86ff94746e56.scope: Deactivated successfully.
Jan 31 05:18:04 np0005603787 podman[241878]: 2026-01-31 10:18:04.339840183 +0000 UTC m=+0.039072839 container create 6e6352344eb38afa063d686260460d01bf82771b3e02ac9adc374bb97715f6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 05:18:04 np0005603787 systemd[1]: Started libpod-conmon-6e6352344eb38afa063d686260460d01bf82771b3e02ac9adc374bb97715f6c3.scope.
Jan 31 05:18:04 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:18:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba1ea0a49db09f3d621ca589a1aad46e665712a42f1d0f69ea8ed2c079a7e9f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:18:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba1ea0a49db09f3d621ca589a1aad46e665712a42f1d0f69ea8ed2c079a7e9f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:18:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba1ea0a49db09f3d621ca589a1aad46e665712a42f1d0f69ea8ed2c079a7e9f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:18:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba1ea0a49db09f3d621ca589a1aad46e665712a42f1d0f69ea8ed2c079a7e9f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:18:04 np0005603787 podman[241878]: 2026-01-31 10:18:04.321540458 +0000 UTC m=+0.020773134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:18:04 np0005603787 podman[241878]: 2026-01-31 10:18:04.438055606 +0000 UTC m=+0.137288312 container init 6e6352344eb38afa063d686260460d01bf82771b3e02ac9adc374bb97715f6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wu, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:18:04 np0005603787 podman[241878]: 2026-01-31 10:18:04.444561208 +0000 UTC m=+0.143793884 container start 6e6352344eb38afa063d686260460d01bf82771b3e02ac9adc374bb97715f6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 05:18:04 np0005603787 podman[241878]: 2026-01-31 10:18:04.448787687 +0000 UTC m=+0.148020423 container attach 6e6352344eb38afa063d686260460d01bf82771b3e02ac9adc374bb97715f6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wu, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 05:18:05 np0005603787 lvm[241973]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:18:05 np0005603787 lvm[241973]: VG ceph_vg1 finished
Jan 31 05:18:05 np0005603787 lvm[241971]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:18:05 np0005603787 lvm[241971]: VG ceph_vg0 finished
Jan 31 05:18:05 np0005603787 lvm[241975]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:18:05 np0005603787 lvm[241975]: VG ceph_vg2 finished
Jan 31 05:18:05 np0005603787 cool_wu[241894]: {}
Jan 31 05:18:05 np0005603787 systemd[1]: libpod-6e6352344eb38afa063d686260460d01bf82771b3e02ac9adc374bb97715f6c3.scope: Deactivated successfully.
Jan 31 05:18:05 np0005603787 systemd[1]: libpod-6e6352344eb38afa063d686260460d01bf82771b3e02ac9adc374bb97715f6c3.scope: Consumed 1.126s CPU time.
Jan 31 05:18:05 np0005603787 podman[241878]: 2026-01-31 10:18:05.224513624 +0000 UTC m=+0.923746280 container died 6e6352344eb38afa063d686260460d01bf82771b3e02ac9adc374bb97715f6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wu, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:18:05 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ba1ea0a49db09f3d621ca589a1aad46e665712a42f1d0f69ea8ed2c079a7e9f6-merged.mount: Deactivated successfully.
Jan 31 05:18:05 np0005603787 podman[241878]: 2026-01-31 10:18:05.269565171 +0000 UTC m=+0.968797827 container remove 6e6352344eb38afa063d686260460d01bf82771b3e02ac9adc374bb97715f6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:18:05 np0005603787 systemd[1]: libpod-conmon-6e6352344eb38afa063d686260460d01bf82771b3e02ac9adc374bb97715f6c3.scope: Deactivated successfully.
Jan 31 05:18:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:18:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:18:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:18:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:18:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:06 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:18:06 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:18:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:18:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:18:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:18:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:18:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:18:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:18:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:18:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:18:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:18:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:19 np0005603787 podman[242014]: 2026-01-31 10:18:19.84418765 +0000 UTC m=+0.060970442 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:18:19 np0005603787 podman[242013]: 2026-01-31 10:18:19.874263829 +0000 UTC m=+0.092999853 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
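The config_data= field embedded in these two health_status records is a Python-literal dict describing how edpm_ansible launched the container (image, bind mounts, healthcheck command). Because it uses single quotes and bare True, json.loads() will not parse it, but ast.literal_eval will. A small sketch, assuming the dict has already been cut out of the journal line into a string:

# Sketch: parse the config_data={...} payload from an edpm_ansible health_status record.
# `raw` is assumed to hold just the {...} literal extracted from the journal line.
import ast

raw = ("{'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', "
       "'net': 'host', 'privileged': True, "
       "'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run']}")
config = ast.literal_eval(raw)     # Python-literal syntax, so json.loads() would fail here

print("image:", config["image"])
for vol in config.get("volumes", []):
    print("bind mount:", vol)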
Jan 31 05:18:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:18:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3497495207' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:18:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:18:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3497495207' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:18:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:18:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:24 np0005603787 ceph-osd[85879]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:18:24 np0005603787 ceph-osd[85879]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5867 writes, 24K keys, 5867 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5867 writes, 1015 syncs, 5.78 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s#012Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55627ce7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency 
Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55627ce7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) 
KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
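journald renders the multi-line RocksDB "DUMPING STATS" output on a single syslog line, with "#012" (octal 012, i.e. a newline) standing in for the original line breaks, which is why the compaction tables above are hard to read. A tiny sketch to re-expand them when working from an exported log file (the filename ceph-osd.log is hypothetical):

# Sketch: re-expand journald's "#012" newline escapes so the RocksDB
# "DUMPING STATS" tables become readable again.
def expand_octal_newlines(line: str) -> str:
    # syslog escapes control characters as #ooo (octal); 012 is "\n"
    return line.replace("#012", "\n")

with open("ceph-osd.log") as f:            # hypothetical export of the journal
    for line in f:
        if "DUMPING STATS" in line or "#012" in line:
            print(expand_octal_newlines(line.rstrip("\n")))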
Jan 31 05:18:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:18:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:30 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:18:30 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1202.2 total, 600.0 interval#012Cumulative writes: 7121 writes, 29K keys, 7121 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7121 writes, 1410 syncs, 5.05 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 224 writes, 337 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1202.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558daf9bba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency 
Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1202.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558daf9bba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) 
KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1202.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Jan 31 05:18:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:32 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:18:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:18:37.058 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:18:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:18:37.059 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:18:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:18:37.060 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
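These three DEBUG lines are oslo.concurrency's standard acquire/held/release trace around the metadata agent's periodic _check_child_processes task. A minimal sketch of the pattern that produces them, using oslo_concurrency.lockutils; the lock name is reused from the log for illustration, and the body is a stand-in, not the agent's actual code:

# Sketch: the acquire/release pattern behind the three DEBUG lines above.
from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # neutron's ProcessMonitor walks its monitored external processes here and
    # respawns any that have died; this stand-in only simulates the guarded work.
    pass

check_child_processes()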
Jan 31 05:18:37 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:18:37 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.3 total, 600.0 interval#012Cumulative writes: 5653 writes, 24K keys, 5653 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5653 writes, 897 syncs, 6.30 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c8dcfd98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency 
Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c8dcfd98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) 
KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
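Note on the ceph-osd stats record above (a single journal message that spills across the unprefixed lines and is cut off mid-word): the syslog pipeline stores embedded control characters as #ooo octal escapes, so #012 is a newline, and the #033 sequences in the nova lines further down are ANSI ESC from coloured output. A minimal Python sketch, assuming only that escaping convention, to turn such records back into the multi-line report RocksDB actually printed:

    # unescape_octal.py - a sketch, not part of the deployment: reverse the
    # rsyslog-style "#ooo" octal escapes (e.g. #012 = LF, #011 = TAB, #033 = ESC)
    # so long one-line records such as the RocksDB stats dump read naturally.
    import re
    import sys

    OCTAL_ESCAPE = re.compile(r'#([0-7]{3})')

    def unescape(line: str) -> str:
        # Replace each three-digit octal escape with the character it encodes.
        return OCTAL_ESCAPE.sub(lambda m: chr(int(m.group(1), 8)), line)

    if __name__ == "__main__":
        for raw in sys.stdin:
            sys.stdout.write(unescape(raw))

Piping this capture through the sketch expands the dump above into the usual per-column-family compaction tables.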
Jan 31 05:18:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:37 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:18:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:41 np0005603787 ceph-mgr[75453]: [devicehealth INFO root] Check health
Jan 31 05:18:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:42 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:18:43
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'backups', 'volumes', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:18:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:18:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:18:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:50 np0005603787 podman[242059]: 2026-01-31 10:18:50.885606704 +0000 UTC m=+0.100360901 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 05:18:50 np0005603787 podman[242058]: 2026-01-31 10:18:50.915899792 +0000 UTC m=+0.131613915 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
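The two podman records above are the periodic container health checks; their config_data appears to run /openstack/healthcheck inside each container as the test command. For scanning a saved capture like this one, a small sketch that pulls the container name and health result out of each health_status record, relying only on the key=value fields visible above:

    # health_status_scan.py - sketch for extracting podman health_status events
    # from a saved journal/syslog capture such as this file.
    import re
    import sys

    FIELD = re.compile(r'(container_name|health_status|health_failing_streak)=([^,)]+)')

    for line in sys.stdin:
        if ' container health_status ' not in line:
            continue
        fields = dict(FIELD.findall(line))
        print(fields.get('container_name', '?'),
              fields.get('health_status', '?'),
              'failing_streak=' + fields.get('health_failing_streak', '?'))

Against the two lines above it prints "ovn_metadata_agent healthy failing_streak=0" and "ovn_controller healthy failing_streak=0".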
Jan 31 05:18:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:52 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:18:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1786947556520692e-06 of space, bias 4.0, pg target 0.0014144337067824831 quantized to 16 (current 16)
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:18:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
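The pg_autoscaler arithmetic above is internally consistent: each raw pg target equals the pool's "using ... of space" ratio times its bias times a PG budget of 300, which presumably is the default mon_target_pg_per_osd of 100 multiplied by the 3 OSDs reported in the osdmap lines. A quick check (illustrative only; the "quantized to" step that rounds tiny targets back to the current pg_num is not reproduced here):

    # pg_autoscaler_check.py - recompute the raw pg targets logged above.
    # Assumption: budget = mon_target_pg_per_osd (default 100) * 3 OSDs.
    PG_BUDGET = 100 * 3

    pools = {
        # pool: (capacity ratio from "using ... of space", bias)
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (1.1786947556520692e-06, 4.0),
        ".rgw.root":          (3.8154424692322717e-07, 1.0),
        "default.rgw.log":    (4.1969867161554995e-06, 1.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }

    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * PG_BUDGET:.16g}")

The printed values match the logged targets (0.0021557..., 0.0014144..., and so on), all far below the pools' current pg_num, so no resizing is proposed.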
Jan 31 05:18:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:57 np0005603787 nova_compute[238603]: 2026-01-31 10:18:57.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:18:57 np0005603787 nova_compute[238603]: 2026-01-31 10:18:57.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 05:18:57 np0005603787 nova_compute[238603]: 2026-01-31 10:18:57.130 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 05:18:57 np0005603787 nova_compute[238603]: 2026-01-31 10:18:57.130 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:18:57 np0005603787 nova_compute[238603]: 2026-01-31 10:18:57.131 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 05:18:57 np0005603787 nova_compute[238603]: 2026-01-31 10:18:57.153 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:18:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:57 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:18:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 31 05:18:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 31 05:18:58 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 31 05:18:59 np0005603787 nova_compute[238603]: 2026-01-31 10:18:59.165 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:18:59 np0005603787 nova_compute[238603]: 2026-01-31 10:18:59.165 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:18:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:18:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 31 05:18:59 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 31 05:18:59 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 31 05:19:00 np0005603787 nova_compute[238603]: 2026-01-31 10:19:00.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:19:00 np0005603787 nova_compute[238603]: 2026-01-31 10:19:00.104 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:19:00 np0005603787 nova_compute[238603]: 2026-01-31 10:19:00.104 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:19:00 np0005603787 nova_compute[238603]: 2026-01-31 10:19:00.122 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:19:00 np0005603787 nova_compute[238603]: 2026-01-31 10:19:00.123 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:19:00 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 31 05:19:00 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 31 05:19:00 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 31 05:19:01 np0005603787 nova_compute[238603]: 2026-01-31 10:19:01.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:19:01 np0005603787 nova_compute[238603]: 2026-01-31 10:19:01.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:19:01 np0005603787 nova_compute[238603]: 2026-01-31 10:19:01.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:19:01 np0005603787 nova_compute[238603]: 2026-01-31 10:19:01.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:19:01 np0005603787 nova_compute[238603]: 2026-01-31 10:19:01.139 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:19:01 np0005603787 nova_compute[238603]: 2026-01-31 10:19:01.140 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:19:01 np0005603787 nova_compute[238603]: 2026-01-31 10:19:01.140 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:19:01 np0005603787 nova_compute[238603]: 2026-01-31 10:19:01.140 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:19:01 np0005603787 nova_compute[238603]: 2026-01-31 10:19:01.140 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:19:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 8.5 MiB data, 136 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 1.3 MiB/s wr, 1 op/s
Jan 31 05:19:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:19:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1849951344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:19:01 np0005603787 nova_compute[238603]: 2026-01-31 10:19:01.630 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
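The resource tracker audit above shells out to ceph df --format=json (via oslo_concurrency.processutils), and the mon audit lines show the matching df dispatch from client.openstack. A stand-alone sketch of the same probe; the --id/--conf arguments are taken from the logged command, while the stats.total_bytes / stats.total_avail_bytes key names follow recent ceph df JSON output and are an assumption here:

    # ceph_df_probe.py - sketch of the capacity probe nova runs above.
    import json
    import subprocess

    def ceph_df(client_id="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    if __name__ == "__main__":
        stats = ceph_df()["stats"]  # key names assumed, see note above
        gib = 1024 ** 3
        print(f"total {stats['total_bytes'] / gib:.1f} GiB, "
              f"avail {stats['total_avail_bytes'] / gib:.1f} GiB")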
Jan 31 05:19:01 np0005603787 nova_compute[238603]: 2026-01-31 10:19:01.814 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:19:01 np0005603787 nova_compute[238603]: 2026-01-31 10:19:01.815 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5152MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:19:01 np0005603787 nova_compute[238603]: 2026-01-31 10:19:01.815 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:19:01 np0005603787 nova_compute[238603]: 2026-01-31 10:19:01.816 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:19:01 np0005603787 nova_compute[238603]: 2026-01-31 10:19:01.976 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:19:01 np0005603787 nova_compute[238603]: 2026-01-31 10:19:01.977 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:19:02 np0005603787 nova_compute[238603]: 2026-01-31 10:19:02.037 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Refreshing inventories for resource provider 207962d2-1ba9-4db2-8533-2a30e7131f3e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 05:19:02 np0005603787 nova_compute[238603]: 2026-01-31 10:19:02.137 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Updating ProviderTree inventory for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 05:19:02 np0005603787 nova_compute[238603]: 2026-01-31 10:19:02.137 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Updating inventory in ProviderTree for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 05:19:02 np0005603787 nova_compute[238603]: 2026-01-31 10:19:02.161 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Refreshing aggregate associations for resource provider 207962d2-1ba9-4db2-8533-2a30e7131f3e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 05:19:02 np0005603787 nova_compute[238603]: 2026-01-31 10:19:02.192 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Refreshing trait associations for resource provider 207962d2-1ba9-4db2-8533-2a30e7131f3e, traits: COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SVM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE41,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 05:19:02 np0005603787 nova_compute[238603]: 2026-01-31 10:19:02.219 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:19:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:19:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:19:02 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1366403333' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:19:02 np0005603787 nova_compute[238603]: 2026-01-31 10:19:02.748 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:19:02 np0005603787 nova_compute[238603]: 2026-01-31 10:19:02.752 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:19:02 np0005603787 nova_compute[238603]: 2026-01-31 10:19:02.778 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:19:02 np0005603787 nova_compute[238603]: 2026-01-31 10:19:02.780 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:19:02 np0005603787 nova_compute[238603]: 2026-01-31 10:19:02.780 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
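The inventory dict reported to placement above (VCPU, MEMORY_MB, DISK_GB, each with total, reserved and allocation_ratio) is what determines schedulable capacity: placement treats it as (total - reserved) * allocation_ratio per resource class. Applied to the logged figures:

    # placement_capacity.py - worked check of the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 53.1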
Jan 31 05:19:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 31 05:19:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 31 05:19:03 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 31 05:19:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 24 MiB data, 144 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.0 MiB/s wr, 38 op/s
Jan 31 05:19:04 np0005603787 nova_compute[238603]: 2026-01-31 10:19:04.780 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:19:04 np0005603787 nova_compute[238603]: 2026-01-31 10:19:04.781 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:19:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 41 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 6.8 MiB/s wr, 62 op/s
Jan 31 05:19:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:19:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:19:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:19:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:19:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:19:06 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:19:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:19:06 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:19:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:19:06 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:19:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:19:06 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:19:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:19:06 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:19:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:19:06 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:19:06 np0005603787 podman[242364]: 2026-01-31 10:19:06.83163571 +0000 UTC m=+0.037741451 container create aeb41d0bf09af6ad06611c3a152ab98ebb1c38ee833320878487b78531f0e5b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:19:06 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:19:06 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:19:06 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:19:06 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:19:06 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:19:06 np0005603787 systemd[1]: Started libpod-conmon-aeb41d0bf09af6ad06611c3a152ab98ebb1c38ee833320878487b78531f0e5b5.scope.
Jan 31 05:19:06 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:19:06 np0005603787 podman[242364]: 2026-01-31 10:19:06.814625866 +0000 UTC m=+0.020731627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:19:06 np0005603787 podman[242364]: 2026-01-31 10:19:06.920747743 +0000 UTC m=+0.126853504 container init aeb41d0bf09af6ad06611c3a152ab98ebb1c38ee833320878487b78531f0e5b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:19:06 np0005603787 podman[242364]: 2026-01-31 10:19:06.928518275 +0000 UTC m=+0.134624056 container start aeb41d0bf09af6ad06611c3a152ab98ebb1c38ee833320878487b78531f0e5b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mccarthy, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 05:19:06 np0005603787 systemd[1]: libpod-aeb41d0bf09af6ad06611c3a152ab98ebb1c38ee833320878487b78531f0e5b5.scope: Deactivated successfully.
Jan 31 05:19:06 np0005603787 stupefied_mccarthy[242380]: 167 167
Jan 31 05:19:06 np0005603787 podman[242364]: 2026-01-31 10:19:06.933831161 +0000 UTC m=+0.139936952 container attach aeb41d0bf09af6ad06611c3a152ab98ebb1c38ee833320878487b78531f0e5b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mccarthy, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 05:19:06 np0005603787 conmon[242380]: conmon aeb41d0bf09af6ad0661 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aeb41d0bf09af6ad06611c3a152ab98ebb1c38ee833320878487b78531f0e5b5.scope/container/memory.events
Jan 31 05:19:06 np0005603787 podman[242364]: 2026-01-31 10:19:06.934996963 +0000 UTC m=+0.141102704 container died aeb41d0bf09af6ad06611c3a152ab98ebb1c38ee833320878487b78531f0e5b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mccarthy, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:19:06 np0005603787 systemd[1]: var-lib-containers-storage-overlay-fcecb3c40d893217361b0186fd77903648136ad6fce52b53be90c413c08b370b-merged.mount: Deactivated successfully.
Jan 31 05:19:06 np0005603787 podman[242364]: 2026-01-31 10:19:06.985961804 +0000 UTC m=+0.192067545 container remove aeb41d0bf09af6ad06611c3a152ab98ebb1c38ee833320878487b78531f0e5b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 05:19:07 np0005603787 systemd[1]: libpod-conmon-aeb41d0bf09af6ad06611c3a152ab98ebb1c38ee833320878487b78531f0e5b5.scope: Deactivated successfully.
Jan 31 05:19:07 np0005603787 podman[242403]: 2026-01-31 10:19:07.163380398 +0000 UTC m=+0.100345741 container create a4255783020a1311ca0aff366dc9e431d4cbac44e6de7143ac862406d5ab7037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Jan 31 05:19:07 np0005603787 podman[242403]: 2026-01-31 10:19:07.082694475 +0000 UTC m=+0.019659838 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:19:07 np0005603787 systemd[1]: Started libpod-conmon-a4255783020a1311ca0aff366dc9e431d4cbac44e6de7143ac862406d5ab7037.scope.
Jan 31 05:19:07 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:19:07 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ce83b3fc42dd4326621657d1dab7ca1348b6cac00437bea485d82398e7058/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:19:07 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ce83b3fc42dd4326621657d1dab7ca1348b6cac00437bea485d82398e7058/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:19:07 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ce83b3fc42dd4326621657d1dab7ca1348b6cac00437bea485d82398e7058/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:19:07 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ce83b3fc42dd4326621657d1dab7ca1348b6cac00437bea485d82398e7058/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:19:07 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ce83b3fc42dd4326621657d1dab7ca1348b6cac00437bea485d82398e7058/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:19:07 np0005603787 podman[242403]: 2026-01-31 10:19:07.25867545 +0000 UTC m=+0.195640813 container init a4255783020a1311ca0aff366dc9e431d4cbac44e6de7143ac862406d5ab7037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:19:07 np0005603787 podman[242403]: 2026-01-31 10:19:07.264856978 +0000 UTC m=+0.201822311 container start a4255783020a1311ca0aff366dc9e431d4cbac44e6de7143ac862406d5ab7037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mendel, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:19:07 np0005603787 podman[242403]: 2026-01-31 10:19:07.268455737 +0000 UTC m=+0.205421080 container attach a4255783020a1311ca0aff366dc9e431d4cbac44e6de7143ac862406d5ab7037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mendel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 05:19:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 41 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.3 MiB/s wr, 48 op/s
Jan 31 05:19:07 np0005603787 friendly_mendel[242419]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:19:07 np0005603787 friendly_mendel[242419]: --> All data devices are unavailable
Jan 31 05:19:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:19:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 31 05:19:07 np0005603787 systemd[1]: libpod-a4255783020a1311ca0aff366dc9e431d4cbac44e6de7143ac862406d5ab7037.scope: Deactivated successfully.
Jan 31 05:19:07 np0005603787 podman[242403]: 2026-01-31 10:19:07.742590941 +0000 UTC m=+0.679556354 container died a4255783020a1311ca0aff366dc9e431d4cbac44e6de7143ac862406d5ab7037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mendel, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 05:19:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 31 05:19:08 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 31 05:19:08 np0005603787 systemd[1]: var-lib-containers-storage-overlay-7f5ce83b3fc42dd4326621657d1dab7ca1348b6cac00437bea485d82398e7058-merged.mount: Deactivated successfully.
Jan 31 05:19:08 np0005603787 podman[242403]: 2026-01-31 10:19:08.597129344 +0000 UTC m=+1.534094717 container remove a4255783020a1311ca0aff366dc9e431d4cbac44e6de7143ac862406d5ab7037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mendel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:19:08 np0005603787 systemd[1]: libpod-conmon-a4255783020a1311ca0aff366dc9e431d4cbac44e6de7143ac862406d5ab7037.scope: Deactivated successfully.
Jan 31 05:19:09 np0005603787 podman[242514]: 2026-01-31 10:19:09.041901817 +0000 UTC m=+0.020714426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:19:09 np0005603787 podman[242514]: 2026-01-31 10:19:09.308718122 +0000 UTC m=+0.287530711 container create fac0f11788642d7517b3d847a2a2684bcb0162d9fbef3c96836251b786d16f74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_saha, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 05:19:09 np0005603787 systemd[1]: Started libpod-conmon-fac0f11788642d7517b3d847a2a2684bcb0162d9fbef3c96836251b786d16f74.scope.
Jan 31 05:19:09 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:19:09 np0005603787 podman[242514]: 2026-01-31 10:19:09.469734068 +0000 UTC m=+0.448546677 container init fac0f11788642d7517b3d847a2a2684bcb0162d9fbef3c96836251b786d16f74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 05:19:09 np0005603787 podman[242514]: 2026-01-31 10:19:09.474850838 +0000 UTC m=+0.453663457 container start fac0f11788642d7517b3d847a2a2684bcb0162d9fbef3c96836251b786d16f74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:19:09 np0005603787 crazy_saha[242530]: 167 167
Jan 31 05:19:09 np0005603787 systemd[1]: libpod-fac0f11788642d7517b3d847a2a2684bcb0162d9fbef3c96836251b786d16f74.scope: Deactivated successfully.
Jan 31 05:19:09 np0005603787 conmon[242530]: conmon fac0f11788642d7517b3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fac0f11788642d7517b3d847a2a2684bcb0162d9fbef3c96836251b786d16f74.scope/container/memory.events
Jan 31 05:19:09 np0005603787 podman[242514]: 2026-01-31 10:19:09.546063523 +0000 UTC m=+0.524876162 container attach fac0f11788642d7517b3d847a2a2684bcb0162d9fbef3c96836251b786d16f74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:19:09 np0005603787 podman[242514]: 2026-01-31 10:19:09.546775352 +0000 UTC m=+0.525587961 container died fac0f11788642d7517b3d847a2a2684bcb0162d9fbef3c96836251b786d16f74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_saha, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 05:19:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 4.1 MiB/s wr, 46 op/s
Jan 31 05:19:09 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f775d70bb006afa33ea24743ade4941ead431c150cdbd865d68956577efb4754-merged.mount: Deactivated successfully.
Jan 31 05:19:10 np0005603787 podman[242514]: 2026-01-31 10:19:10.016182998 +0000 UTC m=+0.994995617 container remove fac0f11788642d7517b3d847a2a2684bcb0162d9fbef3c96836251b786d16f74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 05:19:10 np0005603787 systemd[1]: libpod-conmon-fac0f11788642d7517b3d847a2a2684bcb0162d9fbef3c96836251b786d16f74.scope: Deactivated successfully.
Jan 31 05:19:10 np0005603787 podman[242555]: 2026-01-31 10:19:10.222258384 +0000 UTC m=+0.076882879 container create e776054a38a77e6bc0337bf95a89aff297674889e2c0f78943bff5ad4b180f0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_antonelli, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:19:10 np0005603787 podman[242555]: 2026-01-31 10:19:10.170665756 +0000 UTC m=+0.025290301 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:19:10 np0005603787 systemd[1]: Started libpod-conmon-e776054a38a77e6bc0337bf95a89aff297674889e2c0f78943bff5ad4b180f0a.scope.
Jan 31 05:19:10 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:19:10 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/481d479b5289a9a74d4053fff8bd582e9073fe2972ba56db97b8c89afe6ca150/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:19:10 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/481d479b5289a9a74d4053fff8bd582e9073fe2972ba56db97b8c89afe6ca150/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:19:10 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/481d479b5289a9a74d4053fff8bd582e9073fe2972ba56db97b8c89afe6ca150/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:19:10 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/481d479b5289a9a74d4053fff8bd582e9073fe2972ba56db97b8c89afe6ca150/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:19:10 np0005603787 podman[242555]: 2026-01-31 10:19:10.593799049 +0000 UTC m=+0.448423604 container init e776054a38a77e6bc0337bf95a89aff297674889e2c0f78943bff5ad4b180f0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_antonelli, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 05:19:10 np0005603787 podman[242555]: 2026-01-31 10:19:10.600694188 +0000 UTC m=+0.455318713 container start e776054a38a77e6bc0337bf95a89aff297674889e2c0f78943bff5ad4b180f0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_antonelli, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 05:19:10 np0005603787 podman[242555]: 2026-01-31 10:19:10.682550032 +0000 UTC m=+0.537174567 container attach e776054a38a77e6bc0337bf95a89aff297674889e2c0f78943bff5ad4b180f0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]: {
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:    "0": [
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:        {
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "devices": [
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "/dev/loop3"
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            ],
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "lv_name": "ceph_lv0",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "lv_size": "21470642176",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "name": "ceph_lv0",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "tags": {
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.cluster_name": "ceph",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.crush_device_class": "",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.encrypted": "0",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.objectstore": "bluestore",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.osd_id": "0",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.type": "block",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.vdo": "0",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.with_tpm": "0"
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            },
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "type": "block",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "vg_name": "ceph_vg0"
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:        }
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:    ],
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:    "1": [
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:        {
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "devices": [
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "/dev/loop4"
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            ],
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "lv_name": "ceph_lv1",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "lv_size": "21470642176",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "name": "ceph_lv1",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "tags": {
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.cluster_name": "ceph",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.crush_device_class": "",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.encrypted": "0",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.objectstore": "bluestore",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.osd_id": "1",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.type": "block",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.vdo": "0",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.with_tpm": "0"
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            },
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "type": "block",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "vg_name": "ceph_vg1"
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:        }
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:    ],
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:    "2": [
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:        {
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "devices": [
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "/dev/loop5"
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            ],
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "lv_name": "ceph_lv2",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "lv_size": "21470642176",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "name": "ceph_lv2",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "tags": {
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.cluster_name": "ceph",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.crush_device_class": "",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.encrypted": "0",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.objectstore": "bluestore",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.osd_id": "2",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.type": "block",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.vdo": "0",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:                "ceph.with_tpm": "0"
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            },
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "type": "block",
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:            "vg_name": "ceph_vg2"
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:        }
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]:    ]
Jan 31 05:19:10 np0005603787 tender_antonelli[242572]: }
Jan 31 05:19:10 np0005603787 systemd[1]: libpod-e776054a38a77e6bc0337bf95a89aff297674889e2c0f78943bff5ad4b180f0a.scope: Deactivated successfully.
Jan 31 05:19:10 np0005603787 podman[242555]: 2026-01-31 10:19:10.866217386 +0000 UTC m=+0.720841921 container died e776054a38a77e6bc0337bf95a89aff297674889e2c0f78943bff5ad4b180f0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_antonelli, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:19:10 np0005603787 systemd[1]: var-lib-containers-storage-overlay-481d479b5289a9a74d4053fff8bd582e9073fe2972ba56db97b8c89afe6ca150-merged.mount: Deactivated successfully.
Jan 31 05:19:11 np0005603787 podman[242555]: 2026-01-31 10:19:11.280534588 +0000 UTC m=+1.135159083 container remove e776054a38a77e6bc0337bf95a89aff297674889e2c0f78943bff5ad4b180f0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_antonelli, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 05:19:11 np0005603787 systemd[1]: libpod-conmon-e776054a38a77e6bc0337bf95a89aff297674889e2c0f78943bff5ad4b180f0a.scope: Deactivated successfully.
Jan 31 05:19:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.0 MiB/s wr, 23 op/s
Jan 31 05:19:11 np0005603787 podman[242656]: 2026-01-31 10:19:11.767103943 +0000 UTC m=+0.039719526 container create c115473688f247dae1af356f08586a2d11cbc39829ba8885f2e9e532ae6ad31d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_edison, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 05:19:11 np0005603787 systemd[1]: Started libpod-conmon-c115473688f247dae1af356f08586a2d11cbc39829ba8885f2e9e532ae6ad31d.scope.
Jan 31 05:19:11 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:19:11 np0005603787 podman[242656]: 2026-01-31 10:19:11.749502522 +0000 UTC m=+0.022118115 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:19:11 np0005603787 podman[242656]: 2026-01-31 10:19:11.850270964 +0000 UTC m=+0.122886587 container init c115473688f247dae1af356f08586a2d11cbc39829ba8885f2e9e532ae6ad31d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_edison, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 05:19:11 np0005603787 podman[242656]: 2026-01-31 10:19:11.856242516 +0000 UTC m=+0.128858079 container start c115473688f247dae1af356f08586a2d11cbc39829ba8885f2e9e532ae6ad31d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 05:19:11 np0005603787 podman[242656]: 2026-01-31 10:19:11.860504204 +0000 UTC m=+0.133119817 container attach c115473688f247dae1af356f08586a2d11cbc39829ba8885f2e9e532ae6ad31d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_edison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:19:11 np0005603787 hardcore_edison[242672]: 167 167
Jan 31 05:19:11 np0005603787 systemd[1]: libpod-c115473688f247dae1af356f08586a2d11cbc39829ba8885f2e9e532ae6ad31d.scope: Deactivated successfully.
Jan 31 05:19:11 np0005603787 podman[242656]: 2026-01-31 10:19:11.862328503 +0000 UTC m=+0.134944076 container died c115473688f247dae1af356f08586a2d11cbc39829ba8885f2e9e532ae6ad31d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_edison, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Jan 31 05:19:11 np0005603787 systemd[1]: var-lib-containers-storage-overlay-26c9c783e1a0188051944d3c47361e05789ea262b57bed28f03159cd6ab3f654-merged.mount: Deactivated successfully.
Jan 31 05:19:11 np0005603787 podman[242656]: 2026-01-31 10:19:11.906407436 +0000 UTC m=+0.179022999 container remove c115473688f247dae1af356f08586a2d11cbc39829ba8885f2e9e532ae6ad31d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_edison, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 05:19:11 np0005603787 systemd[1]: libpod-conmon-c115473688f247dae1af356f08586a2d11cbc39829ba8885f2e9e532ae6ad31d.scope: Deactivated successfully.
Jan 31 05:19:12 np0005603787 podman[242695]: 2026-01-31 10:19:12.053333128 +0000 UTC m=+0.046931592 container create 858d0412a3b36b34950611f2b9ce91a9125a983bceea48ff14954fb928d27225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:19:12 np0005603787 systemd[1]: Started libpod-conmon-858d0412a3b36b34950611f2b9ce91a9125a983bceea48ff14954fb928d27225.scope.
Jan 31 05:19:12 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:19:12 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc1c606085f5059773e99414572ccaa95a16b3d96f8768187970101c1a5f300/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:19:12 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc1c606085f5059773e99414572ccaa95a16b3d96f8768187970101c1a5f300/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:19:12 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc1c606085f5059773e99414572ccaa95a16b3d96f8768187970101c1a5f300/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:19:12 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc1c606085f5059773e99414572ccaa95a16b3d96f8768187970101c1a5f300/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:19:12 np0005603787 podman[242695]: 2026-01-31 10:19:12.034117014 +0000 UTC m=+0.027715488 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:19:12 np0005603787 podman[242695]: 2026-01-31 10:19:12.150312805 +0000 UTC m=+0.143911289 container init 858d0412a3b36b34950611f2b9ce91a9125a983bceea48ff14954fb928d27225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Jan 31 05:19:12 np0005603787 podman[242695]: 2026-01-31 10:19:12.156990968 +0000 UTC m=+0.150589432 container start 858d0412a3b36b34950611f2b9ce91a9125a983bceea48ff14954fb928d27225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_shockley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 31 05:19:12 np0005603787 podman[242695]: 2026-01-31 10:19:12.164176544 +0000 UTC m=+0.157775058 container attach 858d0412a3b36b34950611f2b9ce91a9125a983bceea48ff14954fb928d27225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_shockley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:19:12.578922) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854752578953, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1537, "num_deletes": 251, "total_data_size": 2471172, "memory_usage": 2522064, "flush_reason": "Manual Compaction"}
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854752609939, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2425606, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14915, "largest_seqno": 16451, "table_properties": {"data_size": 2418358, "index_size": 4255, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14772, "raw_average_key_size": 19, "raw_value_size": 2403845, "raw_average_value_size": 3230, "num_data_blocks": 193, "num_entries": 744, "num_filter_entries": 744, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769854598, "oldest_key_time": 1769854598, "file_creation_time": 1769854752, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 31063 microseconds, and 4148 cpu microseconds.
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:19:12.609982) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2425606 bytes OK
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:19:12.609998) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:19:12.614364) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:19:12.614380) EVENT_LOG_v1 {"time_micros": 1769854752614376, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:19:12.614398) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2464434, prev total WAL file size 2464434, number of live WAL files 2.
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:19:12.614837) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2368KB)], [35(7181KB)]
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854752614893, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9779426, "oldest_snapshot_seqno": -1}
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4085 keys, 7970798 bytes, temperature: kUnknown
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854752680357, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7970798, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7941130, "index_size": 18358, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 99786, "raw_average_key_size": 24, "raw_value_size": 7864830, "raw_average_value_size": 1925, "num_data_blocks": 775, "num_entries": 4085, "num_filter_entries": 4085, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853439, "oldest_key_time": 0, "file_creation_time": 1769854752, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:19:12.680556) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7970798 bytes
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:19:12.685114) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.3 rd, 121.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 7.0 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(7.3) write-amplify(3.3) OK, records in: 4603, records dropped: 518 output_compression: NoCompression
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:19:12.685134) EVENT_LOG_v1 {"time_micros": 1769854752685125, "job": 16, "event": "compaction_finished", "compaction_time_micros": 65515, "compaction_time_cpu_micros": 12340, "output_level": 6, "num_output_files": 1, "total_output_size": 7970798, "num_input_records": 4603, "num_output_records": 4085, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854752685498, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854752686286, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:19:12.614783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:19:12.686310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:19:12.686315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:19:12.686316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:19:12.686318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:19:12.686320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:19:12 np0005603787 lvm[242790]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:19:12 np0005603787 lvm[242790]: VG ceph_vg0 finished
Jan 31 05:19:12 np0005603787 lvm[242791]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:19:12 np0005603787 lvm[242791]: VG ceph_vg1 finished
Jan 31 05:19:12 np0005603787 lvm[242793]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:19:12 np0005603787 lvm[242793]: VG ceph_vg2 finished
Jan 31 05:19:12 np0005603787 exciting_shockley[242712]: {}
Jan 31 05:19:12 np0005603787 systemd[1]: libpod-858d0412a3b36b34950611f2b9ce91a9125a983bceea48ff14954fb928d27225.scope: Deactivated successfully.
Jan 31 05:19:12 np0005603787 systemd[1]: libpod-858d0412a3b36b34950611f2b9ce91a9125a983bceea48ff14954fb928d27225.scope: Consumed 1.105s CPU time.
Jan 31 05:19:12 np0005603787 podman[242695]: 2026-01-31 10:19:12.903153711 +0000 UTC m=+0.896752175 container died 858d0412a3b36b34950611f2b9ce91a9125a983bceea48ff14954fb928d27225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:19:12 np0005603787 systemd[1]: var-lib-containers-storage-overlay-5dc1c606085f5059773e99414572ccaa95a16b3d96f8768187970101c1a5f300-merged.mount: Deactivated successfully.
Jan 31 05:19:12 np0005603787 podman[242695]: 2026-01-31 10:19:12.952763976 +0000 UTC m=+0.946362480 container remove 858d0412a3b36b34950611f2b9ce91a9125a983bceea48ff14954fb928d27225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_shockley, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:19:12 np0005603787 systemd[1]: libpod-conmon-858d0412a3b36b34950611f2b9ce91a9125a983bceea48ff14954fb928d27225.scope: Deactivated successfully.
Jan 31 05:19:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:19:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:19:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:19:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:19:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:19:13 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:19:13 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:19:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 895 B/s wr, 17 op/s
Jan 31 05:19:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:19:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:19:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:19:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:19:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:19:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:19:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 29 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 26 op/s
Jan 31 05:19:15 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 31 05:19:15 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 31 05:19:15 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 31 05:19:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 29 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 05:19:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:19:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 31 05:19:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 31 05:19:18 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 31 05:19:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 4.0 KiB/s wr, 70 op/s
Jan 31 05:19:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.7 KiB/s wr, 45 op/s
Jan 31 05:19:21 np0005603787 podman[242837]: 2026-01-31 10:19:21.858106358 +0000 UTC m=+0.075496072 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 05:19:21 np0005603787 podman[242836]: 2026-01-31 10:19:21.879974425 +0000 UTC m=+0.100732911 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 05:19:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:19:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 31 05:19:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 31 05:19:23 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 31 05:19:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.1 KiB/s wr, 37 op/s
Jan 31 05:19:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.1 KiB/s wr, 37 op/s
Jan 31 05:19:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 641 B/s wr, 20 op/s
Jan 31 05:19:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:19:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 31 05:19:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Jan 31 05:19:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:19:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 47 op/s
Jan 31 05:19:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 05:19:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:19:37.060 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 05:19:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:19:37.061 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 05:19:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:19:37.061 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 05:19:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 05:19:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:19:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 05:19:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Jan 31 05:19:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:19:43
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'backups', 'vms', '.mgr', '.rgw.root', 'images', 'default.rgw.control']
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:19:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:19:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Jan 31 05:19:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:19:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:19:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:19:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:19:52 np0005603787 podman[242884]: 2026-01-31 10:19:52.831720333 +0000 UTC m=+0.047662973 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 05:19:52 np0005603787 podman[242883]: 2026-01-31 10:19:52.897950701 +0000 UTC m=+0.115136855 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 05:19:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:19:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.8805523531010136e-07 of space, bias 1.0, pg target 5.641657059303041e-05 quantized to 32 (current 32)
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1278967097563662e-06 of space, bias 4.0, pg target 0.0013534760517076394 quantized to 16 (current 16)
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:19:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:19:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:19:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:19:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:19:59 np0005603787 nova_compute[238603]: 2026-01-31 10:19:59.099 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:19:59 np0005603787 nova_compute[238603]: 2026-01-31 10:19:59.115 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:19:59 np0005603787 nova_compute[238603]: 2026-01-31 10:19:59.115 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:19:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:01 np0005603787 nova_compute[238603]: 2026-01-31 10:20:01.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:20:01 np0005603787 nova_compute[238603]: 2026-01-31 10:20:01.104 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 05:20:01 np0005603787 nova_compute[238603]: 2026-01-31 10:20:01.104 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 05:20:01 np0005603787 nova_compute[238603]: 2026-01-31 10:20:01.139 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 05:20:01 np0005603787 nova_compute[238603]: 2026-01-31 10:20:01.140 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:20:01 np0005603787 nova_compute[238603]: 2026-01-31 10:20:01.163 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 05:20:01 np0005603787 nova_compute[238603]: 2026-01-31 10:20:01.163 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 05:20:01 np0005603787 nova_compute[238603]: 2026-01-31 10:20:01.163 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 05:20:01 np0005603787 nova_compute[238603]: 2026-01-31 10:20:01.164 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 05:20:01 np0005603787 nova_compute[238603]: 2026-01-31 10:20:01.164 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 05:20:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:20:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1768812256' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:20:01 np0005603787 nova_compute[238603]: 2026-01-31 10:20:01.669 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 05:20:01 np0005603787 nova_compute[238603]: 2026-01-31 10:20:01.844 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 05:20:01 np0005603787 nova_compute[238603]: 2026-01-31 10:20:01.845 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5145MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 05:20:01 np0005603787 nova_compute[238603]: 2026-01-31 10:20:01.846 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 05:20:01 np0005603787 nova_compute[238603]: 2026-01-31 10:20:01.846 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 05:20:01 np0005603787 nova_compute[238603]: 2026-01-31 10:20:01.937 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 05:20:01 np0005603787 nova_compute[238603]: 2026-01-31 10:20:01.937 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 05:20:01 np0005603787 nova_compute[238603]: 2026-01-31 10:20:01.956 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 05:20:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:20:02 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1139444309' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:20:02 np0005603787 nova_compute[238603]: 2026-01-31 10:20:02.470 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 05:20:02 np0005603787 nova_compute[238603]: 2026-01-31 10:20:02.476 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 05:20:02 np0005603787 nova_compute[238603]: 2026-01-31 10:20:02.496 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 05:20:02 np0005603787 nova_compute[238603]: 2026-01-31 10:20:02.499 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 05:20:02 np0005603787 nova_compute[238603]: 2026-01-31 10:20:02.499 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 05:20:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:20:03 np0005603787 nova_compute[238603]: 2026-01-31 10:20:03.462 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:20:03 np0005603787 nova_compute[238603]: 2026-01-31 10:20:03.462 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:20:03 np0005603787 nova_compute[238603]: 2026-01-31 10:20:03.463 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:20:03 np0005603787 nova_compute[238603]: 2026-01-31 10:20:03.463 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:20:03 np0005603787 nova_compute[238603]: 2026-01-31 10:20:03.463 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 05:20:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:05 np0005603787 nova_compute[238603]: 2026-01-31 10:20:05.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:20:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:20:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:12 np0005603787 ceph-osd[87996]: bluestore.MempoolThread fragmentation_score=0.000143 took=0.000032s
Jan 31 05:20:12 np0005603787 ceph-osd[85879]: bluestore.MempoolThread fragmentation_score=0.000140 took=0.000039s
Jan 31 05:20:12 np0005603787 ceph-osd[86934]: bluestore.MempoolThread fragmentation_score=0.000142 took=0.000293s
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:20:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:20:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:20:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:20:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:20:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:20:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:20:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:20:13 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:20:14 np0005603787 podman[243113]: 2026-01-31 10:20:14.035590812 +0000 UTC m=+0.046803389 container create f8558ca53e1d17ac40598451511cafb35b90a1e08ad32be14238f9a1459573f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_margulis, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 05:20:14 np0005603787 systemd[1]: Started libpod-conmon-f8558ca53e1d17ac40598451511cafb35b90a1e08ad32be14238f9a1459573f8.scope.
Jan 31 05:20:14 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:20:14 np0005603787 podman[243113]: 2026-01-31 10:20:14.011631118 +0000 UTC m=+0.022843775 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:20:14 np0005603787 podman[243113]: 2026-01-31 10:20:14.116329486 +0000 UTC m=+0.127542103 container init f8558ca53e1d17ac40598451511cafb35b90a1e08ad32be14238f9a1459573f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_margulis, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 05:20:14 np0005603787 podman[243113]: 2026-01-31 10:20:14.125321702 +0000 UTC m=+0.136534289 container start f8558ca53e1d17ac40598451511cafb35b90a1e08ad32be14238f9a1459573f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 05:20:14 np0005603787 podman[243113]: 2026-01-31 10:20:14.129178678 +0000 UTC m=+0.140391265 container attach f8558ca53e1d17ac40598451511cafb35b90a1e08ad32be14238f9a1459573f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:20:14 np0005603787 nervous_margulis[243129]: 167 167
Jan 31 05:20:14 np0005603787 systemd[1]: libpod-f8558ca53e1d17ac40598451511cafb35b90a1e08ad32be14238f9a1459573f8.scope: Deactivated successfully.
Jan 31 05:20:14 np0005603787 podman[243113]: 2026-01-31 10:20:14.132240251 +0000 UTC m=+0.143452918 container died f8558ca53e1d17ac40598451511cafb35b90a1e08ad32be14238f9a1459573f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_margulis, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 05:20:14 np0005603787 systemd[1]: var-lib-containers-storage-overlay-a8e2b6670d9a13950062d788b0833f1954c18f2957c3d8e2e0f1be474ad44b2d-merged.mount: Deactivated successfully.
Jan 31 05:20:14 np0005603787 podman[243113]: 2026-01-31 10:20:14.177530377 +0000 UTC m=+0.188742954 container remove f8558ca53e1d17ac40598451511cafb35b90a1e08ad32be14238f9a1459573f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_margulis, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:20:14 np0005603787 systemd[1]: libpod-conmon-f8558ca53e1d17ac40598451511cafb35b90a1e08ad32be14238f9a1459573f8.scope: Deactivated successfully.
Jan 31 05:20:14 np0005603787 podman[243154]: 2026-01-31 10:20:14.347896899 +0000 UTC m=+0.056781852 container create 76d09040a5ce45706c99a262f701c822470d1f49a890e0708d19902ebedcf01d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mirzakhani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:20:14 np0005603787 systemd[1]: Started libpod-conmon-76d09040a5ce45706c99a262f701c822470d1f49a890e0708d19902ebedcf01d.scope.
Jan 31 05:20:14 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:20:14 np0005603787 podman[243154]: 2026-01-31 10:20:14.325444396 +0000 UTC m=+0.034329399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:20:14 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af33c876fab637824d08053efde3d54d68a56a0240b7e758ba5249459b96feb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:20:14 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af33c876fab637824d08053efde3d54d68a56a0240b7e758ba5249459b96feb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:20:14 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af33c876fab637824d08053efde3d54d68a56a0240b7e758ba5249459b96feb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:20:14 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af33c876fab637824d08053efde3d54d68a56a0240b7e758ba5249459b96feb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:20:14 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af33c876fab637824d08053efde3d54d68a56a0240b7e758ba5249459b96feb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:20:14 np0005603787 podman[243154]: 2026-01-31 10:20:14.436981551 +0000 UTC m=+0.145866494 container init 76d09040a5ce45706c99a262f701c822470d1f49a890e0708d19902ebedcf01d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:20:14 np0005603787 podman[243154]: 2026-01-31 10:20:14.443482679 +0000 UTC m=+0.152367622 container start 76d09040a5ce45706c99a262f701c822470d1f49a890e0708d19902ebedcf01d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 05:20:14 np0005603787 podman[243154]: 2026-01-31 10:20:14.447059317 +0000 UTC m=+0.155944310 container attach 76d09040a5ce45706c99a262f701c822470d1f49a890e0708d19902ebedcf01d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mirzakhani, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 05:20:14 np0005603787 bold_mirzakhani[243172]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:20:14 np0005603787 bold_mirzakhani[243172]: --> All data devices are unavailable
Jan 31 05:20:14 np0005603787 systemd[1]: libpod-76d09040a5ce45706c99a262f701c822470d1f49a890e0708d19902ebedcf01d.scope: Deactivated successfully.
Jan 31 05:20:14 np0005603787 podman[243154]: 2026-01-31 10:20:14.867845735 +0000 UTC m=+0.576730708 container died 76d09040a5ce45706c99a262f701c822470d1f49a890e0708d19902ebedcf01d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mirzakhani, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:20:14 np0005603787 systemd[1]: var-lib-containers-storage-overlay-9af33c876fab637824d08053efde3d54d68a56a0240b7e758ba5249459b96feb-merged.mount: Deactivated successfully.
Jan 31 05:20:14 np0005603787 podman[243154]: 2026-01-31 10:20:14.913593634 +0000 UTC m=+0.622478567 container remove 76d09040a5ce45706c99a262f701c822470d1f49a890e0708d19902ebedcf01d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mirzakhani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 05:20:14 np0005603787 systemd[1]: libpod-conmon-76d09040a5ce45706c99a262f701c822470d1f49a890e0708d19902ebedcf01d.scope: Deactivated successfully.
Jan 31 05:20:15 np0005603787 podman[243268]: 2026-01-31 10:20:15.350534674 +0000 UTC m=+0.045150183 container create aaa2cfbb402df63d5ef363849274edcabc29fb925c28ab67e2a7ca334227273b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kilby, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:20:15 np0005603787 systemd[1]: Started libpod-conmon-aaa2cfbb402df63d5ef363849274edcabc29fb925c28ab67e2a7ca334227273b.scope.
Jan 31 05:20:15 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:20:15 np0005603787 podman[243268]: 2026-01-31 10:20:15.416126385 +0000 UTC m=+0.110741924 container init aaa2cfbb402df63d5ef363849274edcabc29fb925c28ab67e2a7ca334227273b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 05:20:15 np0005603787 podman[243268]: 2026-01-31 10:20:15.422788276 +0000 UTC m=+0.117403785 container start aaa2cfbb402df63d5ef363849274edcabc29fb925c28ab67e2a7ca334227273b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:20:15 np0005603787 exciting_kilby[243282]: 167 167
Jan 31 05:20:15 np0005603787 systemd[1]: libpod-aaa2cfbb402df63d5ef363849274edcabc29fb925c28ab67e2a7ca334227273b.scope: Deactivated successfully.
Jan 31 05:20:15 np0005603787 podman[243268]: 2026-01-31 10:20:15.333413377 +0000 UTC m=+0.028028916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:20:15 np0005603787 podman[243268]: 2026-01-31 10:20:15.429268534 +0000 UTC m=+0.123884063 container attach aaa2cfbb402df63d5ef363849274edcabc29fb925c28ab67e2a7ca334227273b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kilby, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 05:20:15 np0005603787 podman[243268]: 2026-01-31 10:20:15.430197349 +0000 UTC m=+0.124812858 container died aaa2cfbb402df63d5ef363849274edcabc29fb925c28ab67e2a7ca334227273b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 05:20:15 np0005603787 systemd[1]: var-lib-containers-storage-overlay-fa0d871c916b477d485f90ccdc549fa946fc6a4451efbb7134e46bc7bd680ded-merged.mount: Deactivated successfully.
Jan 31 05:20:15 np0005603787 podman[243268]: 2026-01-31 10:20:15.479270139 +0000 UTC m=+0.173885658 container remove aaa2cfbb402df63d5ef363849274edcabc29fb925c28ab67e2a7ca334227273b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 05:20:15 np0005603787 systemd[1]: libpod-conmon-aaa2cfbb402df63d5ef363849274edcabc29fb925c28ab67e2a7ca334227273b.scope: Deactivated successfully.
Jan 31 05:20:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:15 np0005603787 podman[243306]: 2026-01-31 10:20:15.636371308 +0000 UTC m=+0.050262743 container create 89faabb3fdfbdfb07d933a1c2170cac9ac5363922adf49d0585758c7079d8c22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 05:20:15 np0005603787 systemd[1]: Started libpod-conmon-89faabb3fdfbdfb07d933a1c2170cac9ac5363922adf49d0585758c7079d8c22.scope.
Jan 31 05:20:15 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:20:15 np0005603787 podman[243306]: 2026-01-31 10:20:15.614192023 +0000 UTC m=+0.028083478 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:20:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ffbdc6f5211499f995f94aa7db3c2c8c5523e5392b1190c17720a051a0b2e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:20:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ffbdc6f5211499f995f94aa7db3c2c8c5523e5392b1190c17720a051a0b2e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:20:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ffbdc6f5211499f995f94aa7db3c2c8c5523e5392b1190c17720a051a0b2e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:20:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ffbdc6f5211499f995f94aa7db3c2c8c5523e5392b1190c17720a051a0b2e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:20:15 np0005603787 podman[243306]: 2026-01-31 10:20:15.731855685 +0000 UTC m=+0.145747140 container init 89faabb3fdfbdfb07d933a1c2170cac9ac5363922adf49d0585758c7079d8c22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_booth, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:20:15 np0005603787 podman[243306]: 2026-01-31 10:20:15.740367837 +0000 UTC m=+0.154259282 container start 89faabb3fdfbdfb07d933a1c2170cac9ac5363922adf49d0585758c7079d8c22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_booth, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 05:20:15 np0005603787 podman[243306]: 2026-01-31 10:20:15.743853953 +0000 UTC m=+0.157745428 container attach 89faabb3fdfbdfb07d933a1c2170cac9ac5363922adf49d0585758c7079d8c22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 05:20:16 np0005603787 awesome_booth[243322]: {
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:    "0": [
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:        {
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "devices": [
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "/dev/loop3"
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            ],
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "lv_name": "ceph_lv0",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "lv_size": "21470642176",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "name": "ceph_lv0",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "tags": {
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.cluster_name": "ceph",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.crush_device_class": "",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.encrypted": "0",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.objectstore": "bluestore",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.osd_id": "0",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.type": "block",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.vdo": "0",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.with_tpm": "0"
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            },
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "type": "block",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "vg_name": "ceph_vg0"
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:        }
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:    ],
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:    "1": [
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:        {
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "devices": [
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "/dev/loop4"
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            ],
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "lv_name": "ceph_lv1",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "lv_size": "21470642176",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "name": "ceph_lv1",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "tags": {
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.cluster_name": "ceph",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.crush_device_class": "",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.encrypted": "0",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.objectstore": "bluestore",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.osd_id": "1",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.type": "block",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.vdo": "0",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.with_tpm": "0"
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            },
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "type": "block",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "vg_name": "ceph_vg1"
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:        }
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:    ],
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:    "2": [
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:        {
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "devices": [
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "/dev/loop5"
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            ],
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "lv_name": "ceph_lv2",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "lv_size": "21470642176",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "name": "ceph_lv2",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "tags": {
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.cluster_name": "ceph",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.crush_device_class": "",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.encrypted": "0",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.objectstore": "bluestore",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.osd_id": "2",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.type": "block",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.vdo": "0",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:                "ceph.with_tpm": "0"
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            },
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "type": "block",
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:            "vg_name": "ceph_vg2"
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:        }
Jan 31 05:20:16 np0005603787 awesome_booth[243322]:    ]
Jan 31 05:20:16 np0005603787 awesome_booth[243322]: }
Jan 31 05:20:16 np0005603787 systemd[1]: libpod-89faabb3fdfbdfb07d933a1c2170cac9ac5363922adf49d0585758c7079d8c22.scope: Deactivated successfully.
Jan 31 05:20:16 np0005603787 podman[243306]: 2026-01-31 10:20:16.059947384 +0000 UTC m=+0.473838819 container died 89faabb3fdfbdfb07d933a1c2170cac9ac5363922adf49d0585758c7079d8c22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_booth, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 05:20:16 np0005603787 systemd[1]: var-lib-containers-storage-overlay-c1ffbdc6f5211499f995f94aa7db3c2c8c5523e5392b1190c17720a051a0b2e9-merged.mount: Deactivated successfully.
Jan 31 05:20:16 np0005603787 podman[243306]: 2026-01-31 10:20:16.100586503 +0000 UTC m=+0.514477938 container remove 89faabb3fdfbdfb07d933a1c2170cac9ac5363922adf49d0585758c7079d8c22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:20:16 np0005603787 systemd[1]: libpod-conmon-89faabb3fdfbdfb07d933a1c2170cac9ac5363922adf49d0585758c7079d8c22.scope: Deactivated successfully.
Jan 31 05:20:16 np0005603787 podman[243404]: 2026-01-31 10:20:16.522167494 +0000 UTC m=+0.033149107 container create 0dd92d2239df80dc25b9993edda6bc3c47c232158260e4e6b1235048a922a2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:20:16 np0005603787 systemd[1]: Started libpod-conmon-0dd92d2239df80dc25b9993edda6bc3c47c232158260e4e6b1235048a922a2ef.scope.
Jan 31 05:20:16 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:20:16 np0005603787 podman[243404]: 2026-01-31 10:20:16.576527367 +0000 UTC m=+0.087509030 container init 0dd92d2239df80dc25b9993edda6bc3c47c232158260e4e6b1235048a922a2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:20:16 np0005603787 podman[243404]: 2026-01-31 10:20:16.58213609 +0000 UTC m=+0.093117713 container start 0dd92d2239df80dc25b9993edda6bc3c47c232158260e4e6b1235048a922a2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:20:16 np0005603787 sleepy_beaver[243421]: 167 167
Jan 31 05:20:16 np0005603787 systemd[1]: libpod-0dd92d2239df80dc25b9993edda6bc3c47c232158260e4e6b1235048a922a2ef.scope: Deactivated successfully.
Jan 31 05:20:16 np0005603787 podman[243404]: 2026-01-31 10:20:16.586634164 +0000 UTC m=+0.097615777 container attach 0dd92d2239df80dc25b9993edda6bc3c47c232158260e4e6b1235048a922a2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_beaver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:20:16 np0005603787 podman[243404]: 2026-01-31 10:20:16.587281711 +0000 UTC m=+0.098263334 container died 0dd92d2239df80dc25b9993edda6bc3c47c232158260e4e6b1235048a922a2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 05:20:16 np0005603787 podman[243404]: 2026-01-31 10:20:16.509065585 +0000 UTC m=+0.020047218 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:20:16 np0005603787 systemd[1]: var-lib-containers-storage-overlay-b843ba16abe442d0a527c6b7b6cbd5ef76bc89d5814565278b548254fa66adb7-merged.mount: Deactivated successfully.
Jan 31 05:20:16 np0005603787 podman[243404]: 2026-01-31 10:20:16.627205231 +0000 UTC m=+0.138186844 container remove 0dd92d2239df80dc25b9993edda6bc3c47c232158260e4e6b1235048a922a2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 05:20:16 np0005603787 systemd[1]: libpod-conmon-0dd92d2239df80dc25b9993edda6bc3c47c232158260e4e6b1235048a922a2ef.scope: Deactivated successfully.
Jan 31 05:20:16 np0005603787 podman[243444]: 2026-01-31 10:20:16.740550996 +0000 UTC m=+0.034279777 container create 1f99d91555155b38bc88408da307e1935e4483fe59e511b215a94aaff01cd4d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_aryabhata, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 05:20:16 np0005603787 systemd[1]: Started libpod-conmon-1f99d91555155b38bc88408da307e1935e4483fe59e511b215a94aaff01cd4d9.scope.
Jan 31 05:20:16 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:20:16 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919f9d180f9468046e2a27bd7464f63d9367f3fffa2fea937962c0b55588abfb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:20:16 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919f9d180f9468046e2a27bd7464f63d9367f3fffa2fea937962c0b55588abfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:20:16 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919f9d180f9468046e2a27bd7464f63d9367f3fffa2fea937962c0b55588abfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:20:16 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919f9d180f9468046e2a27bd7464f63d9367f3fffa2fea937962c0b55588abfb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:20:16 np0005603787 podman[243444]: 2026-01-31 10:20:16.724455846 +0000 UTC m=+0.018184637 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:20:16 np0005603787 podman[243444]: 2026-01-31 10:20:16.828848517 +0000 UTC m=+0.122577478 container init 1f99d91555155b38bc88408da307e1935e4483fe59e511b215a94aaff01cd4d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_aryabhata, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:20:16 np0005603787 podman[243444]: 2026-01-31 10:20:16.835296522 +0000 UTC m=+0.129025333 container start 1f99d91555155b38bc88408da307e1935e4483fe59e511b215a94aaff01cd4d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_aryabhata, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:20:16 np0005603787 podman[243444]: 2026-01-31 10:20:16.839927669 +0000 UTC m=+0.133656480 container attach 1f99d91555155b38bc88408da307e1935e4483fe59e511b215a94aaff01cd4d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:20:17 np0005603787 lvm[243537]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:20:17 np0005603787 lvm[243540]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:20:17 np0005603787 lvm[243540]: VG ceph_vg1 finished
Jan 31 05:20:17 np0005603787 lvm[243537]: VG ceph_vg0 finished
Jan 31 05:20:17 np0005603787 lvm[243542]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:20:17 np0005603787 lvm[243542]: VG ceph_vg2 finished
Jan 31 05:20:17 np0005603787 determined_aryabhata[243461]: {}
Jan 31 05:20:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:17 np0005603787 systemd[1]: libpod-1f99d91555155b38bc88408da307e1935e4483fe59e511b215a94aaff01cd4d9.scope: Deactivated successfully.
Jan 31 05:20:17 np0005603787 systemd[1]: libpod-1f99d91555155b38bc88408da307e1935e4483fe59e511b215a94aaff01cd4d9.scope: Consumed 1.072s CPU time.
Jan 31 05:20:17 np0005603787 podman[243545]: 2026-01-31 10:20:17.677952829 +0000 UTC m=+0.029402123 container died 1f99d91555155b38bc88408da307e1935e4483fe59e511b215a94aaff01cd4d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_aryabhata, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 05:20:17 np0005603787 systemd[1]: var-lib-containers-storage-overlay-919f9d180f9468046e2a27bd7464f63d9367f3fffa2fea937962c0b55588abfb-merged.mount: Deactivated successfully.
Jan 31 05:20:17 np0005603787 podman[243545]: 2026-01-31 10:20:17.721454997 +0000 UTC m=+0.072904221 container remove 1f99d91555155b38bc88408da307e1935e4483fe59e511b215a94aaff01cd4d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_aryabhata, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:20:17 np0005603787 systemd[1]: libpod-conmon-1f99d91555155b38bc88408da307e1935e4483fe59e511b215a94aaff01cd4d9.scope: Deactivated successfully.
Jan 31 05:20:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:20:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:20:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:20:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:20:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:20:18 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:20:18 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:20:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:20:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2742989527' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:20:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:20:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2742989527' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:20:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:20:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:23 np0005603787 podman[243586]: 2026-01-31 10:20:23.869801565 +0000 UTC m=+0.084007894 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 31 05:20:23 np0005603787 podman[243585]: 2026-01-31 10:20:23.869847967 +0000 UTC m=+0.084056796 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 05:20:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:20:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:20:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:20:37.061 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:20:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:20:37.062 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:20:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:20:37.062 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:20:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:20:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:20:43
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['volumes', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.control', 'default.rgw.log', 'vms']
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:20:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:20:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:20:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:20:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.8805523531010136e-07 of space, bias 1.0, pg target 5.641657059303041e-05 quantized to 32 (current 32)
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1278967097563662e-06 of space, bias 4.0, pg target 0.0013534760517076394 quantized to 16 (current 16)
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:20:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:20:54 np0005603787 podman[243632]: 2026-01-31 10:20:54.855124578 +0000 UTC m=+0.075065322 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Jan 31 05:20:54 np0005603787 podman[243631]: 2026-01-31 10:20:54.885731801 +0000 UTC m=+0.105756868 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 05:20:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:20:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:20:59 np0005603787 nova_compute[238603]: 2026-01-31 10:20:59.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:20:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:00 np0005603787 nova_compute[238603]: 2026-01-31 10:21:00.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:21:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:02 np0005603787 nova_compute[238603]: 2026-01-31 10:21:02.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:21:02 np0005603787 nova_compute[238603]: 2026-01-31 10:21:02.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 05:21:02 np0005603787 nova_compute[238603]: 2026-01-31 10:21:02.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 05:21:02 np0005603787 nova_compute[238603]: 2026-01-31 10:21:02.118 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 05:21:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:21:03 np0005603787 nova_compute[238603]: 2026-01-31 10:21:03.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:21:03 np0005603787 nova_compute[238603]: 2026-01-31 10:21:03.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:21:03 np0005603787 nova_compute[238603]: 2026-01-31 10:21:03.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 05:21:03 np0005603787 nova_compute[238603]: 2026-01-31 10:21:03.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:21:03 np0005603787 nova_compute[238603]: 2026-01-31 10:21:03.260 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 05:21:03 np0005603787 nova_compute[238603]: 2026-01-31 10:21:03.261 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 05:21:03 np0005603787 nova_compute[238603]: 2026-01-31 10:21:03.261 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 05:21:03 np0005603787 nova_compute[238603]: 2026-01-31 10:21:03.261 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 05:21:03 np0005603787 nova_compute[238603]: 2026-01-31 10:21:03.262 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 05:21:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:21:03 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4202822071' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:21:03 np0005603787 nova_compute[238603]: 2026-01-31 10:21:03.796 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 05:21:03 np0005603787 nova_compute[238603]: 2026-01-31 10:21:03.934 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 05:21:03 np0005603787 nova_compute[238603]: 2026-01-31 10:21:03.936 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5143MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 05:21:03 np0005603787 nova_compute[238603]: 2026-01-31 10:21:03.936 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 05:21:03 np0005603787 nova_compute[238603]: 2026-01-31 10:21:03.937 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 05:21:04 np0005603787 nova_compute[238603]: 2026-01-31 10:21:04.037 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 05:21:04 np0005603787 nova_compute[238603]: 2026-01-31 10:21:04.037 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 05:21:04 np0005603787 nova_compute[238603]: 2026-01-31 10:21:04.060 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 05:21:04 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:21:04 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2758350292' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:21:04 np0005603787 nova_compute[238603]: 2026-01-31 10:21:04.540 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 05:21:04 np0005603787 nova_compute[238603]: 2026-01-31 10:21:04.545 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 05:21:04 np0005603787 nova_compute[238603]: 2026-01-31 10:21:04.563 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 05:21:04 np0005603787 nova_compute[238603]: 2026-01-31 10:21:04.566 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 05:21:04 np0005603787 nova_compute[238603]: 2026-01-31 10:21:04.567 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 05:21:05 np0005603787 nova_compute[238603]: 2026-01-31 10:21:05.563 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:21:05 np0005603787 nova_compute[238603]: 2026-01-31 10:21:05.565 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:21:05 np0005603787 nova_compute[238603]: 2026-01-31 10:21:05.565 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:21:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:21:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:21:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:21:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:21:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:21:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:21:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:21:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:21:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:21:18 np0005603787 podman[243814]: 2026-01-31 10:21:18.412506828 +0000 UTC m=+0.053447355 container exec 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 05:21:18 np0005603787 podman[243814]: 2026-01-31 10:21:18.502530218 +0000 UTC m=+0.143470765 container exec_died 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 05:21:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:21:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:21:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:21:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:21:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:21:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:21:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:21:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:21:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:21:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:21:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:21:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:21:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:21:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:21:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:21:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:21:20 np0005603787 podman[244143]: 2026-01-31 10:21:20.244526217 +0000 UTC m=+0.045219061 container create b2becc88e0c980d5fb6c77128056cbae14d0c0ee8f24cd99e7c88bec1a81fed2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 05:21:20 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:21:20 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:21:20 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:21:20 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:21:20 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:21:20 np0005603787 systemd[1]: Started libpod-conmon-b2becc88e0c980d5fb6c77128056cbae14d0c0ee8f24cd99e7c88bec1a81fed2.scope.
Jan 31 05:21:20 np0005603787 podman[244143]: 2026-01-31 10:21:20.220931275 +0000 UTC m=+0.021624199 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:21:20 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:21:20 np0005603787 podman[244143]: 2026-01-31 10:21:20.335123583 +0000 UTC m=+0.135816437 container init b2becc88e0c980d5fb6c77128056cbae14d0c0ee8f24cd99e7c88bec1a81fed2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:21:20 np0005603787 podman[244143]: 2026-01-31 10:21:20.3405792 +0000 UTC m=+0.141272014 container start b2becc88e0c980d5fb6c77128056cbae14d0c0ee8f24cd99e7c88bec1a81fed2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:21:20 np0005603787 thirsty_bartik[244160]: 167 167
Jan 31 05:21:20 np0005603787 systemd[1]: libpod-b2becc88e0c980d5fb6c77128056cbae14d0c0ee8f24cd99e7c88bec1a81fed2.scope: Deactivated successfully.
Jan 31 05:21:20 np0005603787 podman[244143]: 2026-01-31 10:21:20.346230105 +0000 UTC m=+0.146922929 container attach b2becc88e0c980d5fb6c77128056cbae14d0c0ee8f24cd99e7c88bec1a81fed2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bartik, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 05:21:20 np0005603787 podman[244143]: 2026-01-31 10:21:20.346705698 +0000 UTC m=+0.147398522 container died b2becc88e0c980d5fb6c77128056cbae14d0c0ee8f24cd99e7c88bec1a81fed2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bartik, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:21:20 np0005603787 systemd[1]: var-lib-containers-storage-overlay-9bbefb404102b6131e6e9ca4c7bd3ca898997e15bb03e4d24e091b12ca5a2060-merged.mount: Deactivated successfully.
Jan 31 05:21:20 np0005603787 podman[244143]: 2026-01-31 10:21:20.386870491 +0000 UTC m=+0.187563315 container remove b2becc88e0c980d5fb6c77128056cbae14d0c0ee8f24cd99e7c88bec1a81fed2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:21:20 np0005603787 systemd[1]: libpod-conmon-b2becc88e0c980d5fb6c77128056cbae14d0c0ee8f24cd99e7c88bec1a81fed2.scope: Deactivated successfully.
Jan 31 05:21:20 np0005603787 podman[244182]: 2026-01-31 10:21:20.537988842 +0000 UTC m=+0.053268451 container create c7a4cc7d70997edb06c900d0ae77a7d6883636d80e6565032f715fa635c36e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_newton, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 05:21:20 np0005603787 systemd[1]: Started libpod-conmon-c7a4cc7d70997edb06c900d0ae77a7d6883636d80e6565032f715fa635c36e45.scope.
Jan 31 05:21:20 np0005603787 podman[244182]: 2026-01-31 10:21:20.513851055 +0000 UTC m=+0.029130734 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:21:20 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:21:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/139127e229a109c226d45da6380786535dd8e5f872638194e23129a51deb0112/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:21:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/139127e229a109c226d45da6380786535dd8e5f872638194e23129a51deb0112/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:21:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/139127e229a109c226d45da6380786535dd8e5f872638194e23129a51deb0112/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:21:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/139127e229a109c226d45da6380786535dd8e5f872638194e23129a51deb0112/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:21:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/139127e229a109c226d45da6380786535dd8e5f872638194e23129a51deb0112/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:21:20 np0005603787 podman[244182]: 2026-01-31 10:21:20.647315827 +0000 UTC m=+0.162595506 container init c7a4cc7d70997edb06c900d0ae77a7d6883636d80e6565032f715fa635c36e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_newton, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 05:21:20 np0005603787 podman[244182]: 2026-01-31 10:21:20.664285169 +0000 UTC m=+0.179564808 container start c7a4cc7d70997edb06c900d0ae77a7d6883636d80e6565032f715fa635c36e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_newton, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 05:21:20 np0005603787 podman[244182]: 2026-01-31 10:21:20.669909382 +0000 UTC m=+0.185189071 container attach c7a4cc7d70997edb06c900d0ae77a7d6883636d80e6565032f715fa635c36e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_newton, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 05:21:21 np0005603787 boring_newton[244198]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:21:21 np0005603787 boring_newton[244198]: --> All data devices are unavailable
Jan 31 05:21:21 np0005603787 systemd[1]: libpod-c7a4cc7d70997edb06c900d0ae77a7d6883636d80e6565032f715fa635c36e45.scope: Deactivated successfully.
Jan 31 05:21:21 np0005603787 podman[244218]: 2026-01-31 10:21:21.187483484 +0000 UTC m=+0.026574624 container died c7a4cc7d70997edb06c900d0ae77a7d6883636d80e6565032f715fa635c36e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 05:21:21 np0005603787 systemd[1]: var-lib-containers-storage-overlay-139127e229a109c226d45da6380786535dd8e5f872638194e23129a51deb0112-merged.mount: Deactivated successfully.
Jan 31 05:21:21 np0005603787 podman[244218]: 2026-01-31 10:21:21.223116984 +0000 UTC m=+0.062208104 container remove c7a4cc7d70997edb06c900d0ae77a7d6883636d80e6565032f715fa635c36e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_newton, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:21:21 np0005603787 systemd[1]: libpod-conmon-c7a4cc7d70997edb06c900d0ae77a7d6883636d80e6565032f715fa635c36e45.scope: Deactivated successfully.
Jan 31 05:21:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:21:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2359698195' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:21:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:21:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2359698195' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:21:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:21 np0005603787 podman[244295]: 2026-01-31 10:21:21.656688491 +0000 UTC m=+0.046670361 container create 635a0f8522a52011ccbd5cb522d1060eddbdbc73f90333252e18c12a45247ac0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 05:21:21 np0005603787 systemd[1]: Started libpod-conmon-635a0f8522a52011ccbd5cb522d1060eddbdbc73f90333252e18c12a45247ac0.scope.
Jan 31 05:21:21 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:21:21 np0005603787 podman[244295]: 2026-01-31 10:21:21.638826986 +0000 UTC m=+0.028808886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:21:21 np0005603787 podman[244295]: 2026-01-31 10:21:21.740541283 +0000 UTC m=+0.130523163 container init 635a0f8522a52011ccbd5cb522d1060eddbdbc73f90333252e18c12a45247ac0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 05:21:21 np0005603787 podman[244295]: 2026-01-31 10:21:21.748387077 +0000 UTC m=+0.138368937 container start 635a0f8522a52011ccbd5cb522d1060eddbdbc73f90333252e18c12a45247ac0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:21:21 np0005603787 podman[244295]: 2026-01-31 10:21:21.752738905 +0000 UTC m=+0.142720795 container attach 635a0f8522a52011ccbd5cb522d1060eddbdbc73f90333252e18c12a45247ac0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 05:21:21 np0005603787 systemd[1]: libpod-635a0f8522a52011ccbd5cb522d1060eddbdbc73f90333252e18c12a45247ac0.scope: Deactivated successfully.
Jan 31 05:21:21 np0005603787 beautiful_chaum[244311]: 167 167
Jan 31 05:21:21 np0005603787 conmon[244311]: conmon 635a0f8522a52011ccbd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-635a0f8522a52011ccbd5cb522d1060eddbdbc73f90333252e18c12a45247ac0.scope/container/memory.events
Jan 31 05:21:21 np0005603787 podman[244295]: 2026-01-31 10:21:21.755595393 +0000 UTC m=+0.145577253 container died 635a0f8522a52011ccbd5cb522d1060eddbdbc73f90333252e18c12a45247ac0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:21:21 np0005603787 systemd[1]: var-lib-containers-storage-overlay-b832791efabcfabd71265695615b84562abbbe0655ec0f5df14524701cd2481d-merged.mount: Deactivated successfully.
Jan 31 05:21:21 np0005603787 podman[244295]: 2026-01-31 10:21:21.791927561 +0000 UTC m=+0.181909421 container remove 635a0f8522a52011ccbd5cb522d1060eddbdbc73f90333252e18c12a45247ac0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_chaum, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 05:21:21 np0005603787 systemd[1]: libpod-conmon-635a0f8522a52011ccbd5cb522d1060eddbdbc73f90333252e18c12a45247ac0.scope: Deactivated successfully.
Jan 31 05:21:21 np0005603787 podman[244334]: 2026-01-31 10:21:21.932435624 +0000 UTC m=+0.048412968 container create ac1a8259362170917eaf5f40adfdd7d3c69db7929ffd9066e56552db89720ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nightingale, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:21:21 np0005603787 systemd[1]: Started libpod-conmon-ac1a8259362170917eaf5f40adfdd7d3c69db7929ffd9066e56552db89720ffd.scope.
Jan 31 05:21:21 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:21:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa9f3c0e7a423987d6ba90b4bbe77bb48110081e3ce93e33f24d0578ac51a2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:21:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa9f3c0e7a423987d6ba90b4bbe77bb48110081e3ce93e33f24d0578ac51a2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:21:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa9f3c0e7a423987d6ba90b4bbe77bb48110081e3ce93e33f24d0578ac51a2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:21:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa9f3c0e7a423987d6ba90b4bbe77bb48110081e3ce93e33f24d0578ac51a2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:21:22 np0005603787 podman[244334]: 2026-01-31 10:21:21.909321216 +0000 UTC m=+0.025298640 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:21:22 np0005603787 podman[244334]: 2026-01-31 10:21:22.017948741 +0000 UTC m=+0.133926125 container init ac1a8259362170917eaf5f40adfdd7d3c69db7929ffd9066e56552db89720ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:21:22 np0005603787 podman[244334]: 2026-01-31 10:21:22.031396637 +0000 UTC m=+0.147374021 container start ac1a8259362170917eaf5f40adfdd7d3c69db7929ffd9066e56552db89720ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 05:21:22 np0005603787 podman[244334]: 2026-01-31 10:21:22.035213741 +0000 UTC m=+0.151191115 container attach ac1a8259362170917eaf5f40adfdd7d3c69db7929ffd9066e56552db89720ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]: {
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:    "0": [
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:        {
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "devices": [
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "/dev/loop3"
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            ],
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "lv_name": "ceph_lv0",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "lv_size": "21470642176",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "name": "ceph_lv0",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "tags": {
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.cluster_name": "ceph",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.crush_device_class": "",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.encrypted": "0",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.objectstore": "bluestore",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.osd_id": "0",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.type": "block",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.vdo": "0",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.with_tpm": "0"
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            },
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "type": "block",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "vg_name": "ceph_vg0"
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:        }
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:    ],
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:    "1": [
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:        {
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "devices": [
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "/dev/loop4"
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            ],
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "lv_name": "ceph_lv1",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "lv_size": "21470642176",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "name": "ceph_lv1",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "tags": {
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.cluster_name": "ceph",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.crush_device_class": "",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.encrypted": "0",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.objectstore": "bluestore",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.osd_id": "1",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.type": "block",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.vdo": "0",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.with_tpm": "0"
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            },
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "type": "block",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "vg_name": "ceph_vg1"
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:        }
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:    ],
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:    "2": [
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:        {
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "devices": [
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "/dev/loop5"
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            ],
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "lv_name": "ceph_lv2",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "lv_size": "21470642176",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "name": "ceph_lv2",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "tags": {
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.cluster_name": "ceph",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.crush_device_class": "",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.encrypted": "0",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.objectstore": "bluestore",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.osd_id": "2",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.type": "block",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.vdo": "0",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:                "ceph.with_tpm": "0"
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            },
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "type": "block",
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:            "vg_name": "ceph_vg2"
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:        }
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]:    ]
Jan 31 05:21:22 np0005603787 busy_nightingale[244351]: }
Jan 31 05:21:22 np0005603787 systemd[1]: libpod-ac1a8259362170917eaf5f40adfdd7d3c69db7929ffd9066e56552db89720ffd.scope: Deactivated successfully.
Jan 31 05:21:22 np0005603787 podman[244334]: 2026-01-31 10:21:22.365749655 +0000 UTC m=+0.481727039 container died ac1a8259362170917eaf5f40adfdd7d3c69db7929ffd9066e56552db89720ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:21:22 np0005603787 systemd[1]: var-lib-containers-storage-overlay-cfa9f3c0e7a423987d6ba90b4bbe77bb48110081e3ce93e33f24d0578ac51a2c-merged.mount: Deactivated successfully.
Jan 31 05:21:22 np0005603787 podman[244334]: 2026-01-31 10:21:22.411264343 +0000 UTC m=+0.527241717 container remove ac1a8259362170917eaf5f40adfdd7d3c69db7929ffd9066e56552db89720ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nightingale, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 05:21:22 np0005603787 systemd[1]: libpod-conmon-ac1a8259362170917eaf5f40adfdd7d3c69db7929ffd9066e56552db89720ffd.scope: Deactivated successfully.
Jan 31 05:21:22 np0005603787 podman[244435]: 2026-01-31 10:21:22.866790378 +0000 UTC m=+0.043221957 container create 25333f73a448ce83bf57d3e316fc3cf4c8ff2cee248ee85eed001804948da76c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_wilson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 05:21:22 np0005603787 systemd[1]: Started libpod-conmon-25333f73a448ce83bf57d3e316fc3cf4c8ff2cee248ee85eed001804948da76c.scope.
Jan 31 05:21:22 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:21:22 np0005603787 podman[244435]: 2026-01-31 10:21:22.849654162 +0000 UTC m=+0.026085761 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:21:22 np0005603787 podman[244435]: 2026-01-31 10:21:22.947458512 +0000 UTC m=+0.123890131 container init 25333f73a448ce83bf57d3e316fc3cf4c8ff2cee248ee85eed001804948da76c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 05:21:22 np0005603787 podman[244435]: 2026-01-31 10:21:22.954235937 +0000 UTC m=+0.130667526 container start 25333f73a448ce83bf57d3e316fc3cf4c8ff2cee248ee85eed001804948da76c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:21:22 np0005603787 busy_wilson[244451]: 167 167
Jan 31 05:21:22 np0005603787 systemd[1]: libpod-25333f73a448ce83bf57d3e316fc3cf4c8ff2cee248ee85eed001804948da76c.scope: Deactivated successfully.
Jan 31 05:21:22 np0005603787 podman[244435]: 2026-01-31 10:21:22.961048342 +0000 UTC m=+0.137479931 container attach 25333f73a448ce83bf57d3e316fc3cf4c8ff2cee248ee85eed001804948da76c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_wilson, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:21:22 np0005603787 podman[244435]: 2026-01-31 10:21:22.96204509 +0000 UTC m=+0.138476679 container died 25333f73a448ce83bf57d3e316fc3cf4c8ff2cee248ee85eed001804948da76c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:21:22 np0005603787 systemd[1]: var-lib-containers-storage-overlay-1dcb535cf15879892ec731324865e8ec4250e11fd536015cc52dee793dc53db4-merged.mount: Deactivated successfully.
Jan 31 05:21:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:21:23 np0005603787 podman[244435]: 2026-01-31 10:21:23.036122546 +0000 UTC m=+0.212554125 container remove 25333f73a448ce83bf57d3e316fc3cf4c8ff2cee248ee85eed001804948da76c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:21:23 np0005603787 systemd[1]: libpod-conmon-25333f73a448ce83bf57d3e316fc3cf4c8ff2cee248ee85eed001804948da76c.scope: Deactivated successfully.
Jan 31 05:21:23 np0005603787 podman[244475]: 2026-01-31 10:21:23.144795062 +0000 UTC m=+0.021796294 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:21:23 np0005603787 podman[244475]: 2026-01-31 10:21:23.252580685 +0000 UTC m=+0.129581887 container create 1c28d7537095decc74adf517f7268e0ae61103fd67b542e17ffd5f948e987197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:21:23 np0005603787 systemd[1]: Started libpod-conmon-1c28d7537095decc74adf517f7268e0ae61103fd67b542e17ffd5f948e987197.scope.
Jan 31 05:21:23 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:21:23 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e5e89d8f347b9aa7b2ca10055897e8de0c8e5fce0e9851b92ff07bb74e0dd62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:21:23 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e5e89d8f347b9aa7b2ca10055897e8de0c8e5fce0e9851b92ff07bb74e0dd62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:21:23 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e5e89d8f347b9aa7b2ca10055897e8de0c8e5fce0e9851b92ff07bb74e0dd62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:21:23 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e5e89d8f347b9aa7b2ca10055897e8de0c8e5fce0e9851b92ff07bb74e0dd62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:21:23 np0005603787 podman[244475]: 2026-01-31 10:21:23.331003569 +0000 UTC m=+0.208004801 container init 1c28d7537095decc74adf517f7268e0ae61103fd67b542e17ffd5f948e987197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_roentgen, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:21:23 np0005603787 podman[244475]: 2026-01-31 10:21:23.337965538 +0000 UTC m=+0.214966730 container start 1c28d7537095decc74adf517f7268e0ae61103fd67b542e17ffd5f948e987197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_roentgen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 05:21:23 np0005603787 podman[244475]: 2026-01-31 10:21:23.341633258 +0000 UTC m=+0.218634490 container attach 1c28d7537095decc74adf517f7268e0ae61103fd67b542e17ffd5f948e987197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 05:21:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:24 np0005603787 lvm[244570]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:21:24 np0005603787 lvm[244570]: VG ceph_vg1 finished
Jan 31 05:21:24 np0005603787 lvm[244569]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:21:24 np0005603787 lvm[244569]: VG ceph_vg0 finished
Jan 31 05:21:24 np0005603787 lvm[244572]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:21:24 np0005603787 lvm[244572]: VG ceph_vg2 finished
Jan 31 05:21:24 np0005603787 agitated_roentgen[244491]: {}
Jan 31 05:21:24 np0005603787 systemd[1]: libpod-1c28d7537095decc74adf517f7268e0ae61103fd67b542e17ffd5f948e987197.scope: Deactivated successfully.
Jan 31 05:21:24 np0005603787 systemd[1]: libpod-1c28d7537095decc74adf517f7268e0ae61103fd67b542e17ffd5f948e987197.scope: Consumed 1.144s CPU time.
Jan 31 05:21:24 np0005603787 podman[244475]: 2026-01-31 10:21:24.146707304 +0000 UTC m=+1.023708546 container died 1c28d7537095decc74adf517f7268e0ae61103fd67b542e17ffd5f948e987197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_roentgen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 05:21:24 np0005603787 systemd[1]: var-lib-containers-storage-overlay-2e5e89d8f347b9aa7b2ca10055897e8de0c8e5fce0e9851b92ff07bb74e0dd62-merged.mount: Deactivated successfully.
Jan 31 05:21:24 np0005603787 podman[244475]: 2026-01-31 10:21:24.18332674 +0000 UTC m=+1.060327942 container remove 1c28d7537095decc74adf517f7268e0ae61103fd67b542e17ffd5f948e987197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 05:21:24 np0005603787 systemd[1]: libpod-conmon-1c28d7537095decc74adf517f7268e0ae61103fd67b542e17ffd5f948e987197.scope: Deactivated successfully.
Jan 31 05:21:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:21:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:21:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:21:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:21:25 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:21:25 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:21:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:25 np0005603787 podman[244612]: 2026-01-31 10:21:25.838816855 +0000 UTC m=+0.050090373 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 05:21:25 np0005603787 podman[244611]: 2026-01-31 10:21:25.870763715 +0000 UTC m=+0.085192429 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 05:21:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:21:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:21:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:21:37.062 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:21:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:21:37.063 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:21:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:21:37.064 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:21:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:21:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:21:43
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['volumes', '.rgw.root', 'images', 'default.rgw.control', 'backups', 'default.rgw.log', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta']
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:21:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:21:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:21:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:21:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.8805523531010136e-07 of space, bias 1.0, pg target 5.641657059303041e-05 quantized to 32 (current 32)
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1278967097563662e-06 of space, bias 4.0, pg target 0.0013534760517076394 quantized to 16 (current 16)
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:21:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:21:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:56 np0005603787 podman[244658]: 2026-01-31 10:21:56.834308655 +0000 UTC m=+0.052537311 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 05:21:56 np0005603787 podman[244657]: 2026-01-31 10:21:56.92084162 +0000 UTC m=+0.140501215 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:21:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:21:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:21:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:00 np0005603787 nova_compute[238603]: 2026-01-31 10:22:00.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:22:01 np0005603787 nova_compute[238603]: 2026-01-31 10:22:01.104 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:22:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.099 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.113 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.114 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.114 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.137 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.138 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.176 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.176 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.176 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.176 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.177 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:22:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:22:03 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2509961957' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.722 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.852 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.853 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5143MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.853 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.853 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.915 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.916 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:22:03 np0005603787 nova_compute[238603]: 2026-01-31 10:22:03.933 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:22:04 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:22:04 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2444465646' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:22:04 np0005603787 nova_compute[238603]: 2026-01-31 10:22:04.488 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 05:22:04 np0005603787 nova_compute[238603]: 2026-01-31 10:22:04.492 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 05:22:04 np0005603787 nova_compute[238603]: 2026-01-31 10:22:04.512 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 05:22:04 np0005603787 nova_compute[238603]: 2026-01-31 10:22:04.513 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 05:22:04 np0005603787 nova_compute[238603]: 2026-01-31 10:22:04.514 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 05:22:05 np0005603787 nova_compute[238603]: 2026-01-31 10:22:05.479 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:22:05 np0005603787 nova_compute[238603]: 2026-01-31 10:22:05.479 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:22:05 np0005603787 nova_compute[238603]: 2026-01-31 10:22:05.479 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:22:05 np0005603787 nova_compute[238603]: 2026-01-31 10:22:05.480 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:22:05 np0005603787 nova_compute[238603]: 2026-01-31 10:22:05.480 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 05:22:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:06 np0005603787 nova_compute[238603]: 2026-01-31 10:22:06.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:22:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:22:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:22:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:22:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:22:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:22:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:22:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:22:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:22:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:22:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:22:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1106026146' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:22:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:22:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1106026146' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:22:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:22:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:22:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:22:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:22:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:22:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:22:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:22:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:22:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:22:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:22:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:22:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:22:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:22:25 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:22:25 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:22:25 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:22:25 np0005603787 podman[244889]: 2026-01-31 10:22:25.196579182 +0000 UTC m=+0.035994760 container create 914b850ba73e1bce723132d209e92f95ddd91916a33bcd540c3ea070775879a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 31 05:22:25 np0005603787 systemd[1]: Started libpod-conmon-914b850ba73e1bce723132d209e92f95ddd91916a33bcd540c3ea070775879a8.scope.
Jan 31 05:22:25 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:22:25 np0005603787 podman[244889]: 2026-01-31 10:22:25.266660859 +0000 UTC m=+0.106076457 container init 914b850ba73e1bce723132d209e92f95ddd91916a33bcd540c3ea070775879a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:22:25 np0005603787 podman[244889]: 2026-01-31 10:22:25.178908512 +0000 UTC m=+0.018324130 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:22:25 np0005603787 podman[244889]: 2026-01-31 10:22:25.275843179 +0000 UTC m=+0.115258797 container start 914b850ba73e1bce723132d209e92f95ddd91916a33bcd540c3ea070775879a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 05:22:25 np0005603787 podman[244889]: 2026-01-31 10:22:25.27990147 +0000 UTC m=+0.119317058 container attach 914b850ba73e1bce723132d209e92f95ddd91916a33bcd540c3ea070775879a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:22:25 np0005603787 friendly_einstein[244905]: 167 167
Jan 31 05:22:25 np0005603787 systemd[1]: libpod-914b850ba73e1bce723132d209e92f95ddd91916a33bcd540c3ea070775879a8.scope: Deactivated successfully.
Jan 31 05:22:25 np0005603787 conmon[244905]: conmon 914b850ba73e1bce7231 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-914b850ba73e1bce723132d209e92f95ddd91916a33bcd540c3ea070775879a8.scope/container/memory.events
Jan 31 05:22:25 np0005603787 podman[244889]: 2026-01-31 10:22:25.283457476 +0000 UTC m=+0.122873054 container died 914b850ba73e1bce723132d209e92f95ddd91916a33bcd540c3ea070775879a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 05:22:25 np0005603787 systemd[1]: var-lib-containers-storage-overlay-484aa8f387566916456a6118afbf324b20d2ab3856a42fdbb781a934db561619-merged.mount: Deactivated successfully.
Jan 31 05:22:25 np0005603787 podman[244889]: 2026-01-31 10:22:25.32180643 +0000 UTC m=+0.161222018 container remove 914b850ba73e1bce723132d209e92f95ddd91916a33bcd540c3ea070775879a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:22:25 np0005603787 systemd[1]: libpod-conmon-914b850ba73e1bce723132d209e92f95ddd91916a33bcd540c3ea070775879a8.scope: Deactivated successfully.
Jan 31 05:22:25 np0005603787 podman[244929]: 2026-01-31 10:22:25.482731989 +0000 UTC m=+0.043079543 container create a50defb8b820b5c28c8e6dc4592de705bdfc64fa9a00e4d448da925f10d287cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_keldysh, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:22:25 np0005603787 systemd[1]: Started libpod-conmon-a50defb8b820b5c28c8e6dc4592de705bdfc64fa9a00e4d448da925f10d287cb.scope.
Jan 31 05:22:25 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:22:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef80070b694c8f9e5b942975b7548c737c1f700c34695f3741668019a0677356/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:22:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef80070b694c8f9e5b942975b7548c737c1f700c34695f3741668019a0677356/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:22:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef80070b694c8f9e5b942975b7548c737c1f700c34695f3741668019a0677356/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:22:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef80070b694c8f9e5b942975b7548c737c1f700c34695f3741668019a0677356/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:22:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef80070b694c8f9e5b942975b7548c737c1f700c34695f3741668019a0677356/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:22:25 np0005603787 podman[244929]: 2026-01-31 10:22:25.46181267 +0000 UTC m=+0.022160224 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:22:25 np0005603787 podman[244929]: 2026-01-31 10:22:25.561761529 +0000 UTC m=+0.122109133 container init a50defb8b820b5c28c8e6dc4592de705bdfc64fa9a00e4d448da925f10d287cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_keldysh, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 05:22:25 np0005603787 podman[244929]: 2026-01-31 10:22:25.574719921 +0000 UTC m=+0.135067475 container start a50defb8b820b5c28c8e6dc4592de705bdfc64fa9a00e4d448da925f10d287cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_keldysh, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:22:25 np0005603787 podman[244929]: 2026-01-31 10:22:25.580450938 +0000 UTC m=+0.140798542 container attach a50defb8b820b5c28c8e6dc4592de705bdfc64fa9a00e4d448da925f10d287cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_keldysh, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 05:22:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:26 np0005603787 cranky_keldysh[244946]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:22:26 np0005603787 cranky_keldysh[244946]: --> All data devices are unavailable
Jan 31 05:22:26 np0005603787 systemd[1]: libpod-a50defb8b820b5c28c8e6dc4592de705bdfc64fa9a00e4d448da925f10d287cb.scope: Deactivated successfully.
Jan 31 05:22:26 np0005603787 podman[244929]: 2026-01-31 10:22:26.030927025 +0000 UTC m=+0.591274539 container died a50defb8b820b5c28c8e6dc4592de705bdfc64fa9a00e4d448da925f10d287cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_keldysh, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:22:26 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ef80070b694c8f9e5b942975b7548c737c1f700c34695f3741668019a0677356-merged.mount: Deactivated successfully.
Jan 31 05:22:26 np0005603787 podman[244929]: 2026-01-31 10:22:26.074344086 +0000 UTC m=+0.634691600 container remove a50defb8b820b5c28c8e6dc4592de705bdfc64fa9a00e4d448da925f10d287cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_keldysh, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 05:22:26 np0005603787 systemd[1]: libpod-conmon-a50defb8b820b5c28c8e6dc4592de705bdfc64fa9a00e4d448da925f10d287cb.scope: Deactivated successfully.
Jan 31 05:22:26 np0005603787 podman[245043]: 2026-01-31 10:22:26.489995376 +0000 UTC m=+0.039709401 container create f678baf5e0292e08ea7e4cc0c40374fad9bc18a42dd785245f76f53c556a0935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_wing, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 05:22:26 np0005603787 systemd[1]: Started libpod-conmon-f678baf5e0292e08ea7e4cc0c40374fad9bc18a42dd785245f76f53c556a0935.scope.
Jan 31 05:22:26 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:22:26 np0005603787 podman[245043]: 2026-01-31 10:22:26.469587552 +0000 UTC m=+0.019301577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:22:26 np0005603787 podman[245043]: 2026-01-31 10:22:26.570006563 +0000 UTC m=+0.119720568 container init f678baf5e0292e08ea7e4cc0c40374fad9bc18a42dd785245f76f53c556a0935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Jan 31 05:22:26 np0005603787 podman[245043]: 2026-01-31 10:22:26.577836757 +0000 UTC m=+0.127550742 container start f678baf5e0292e08ea7e4cc0c40374fad9bc18a42dd785245f76f53c556a0935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 05:22:26 np0005603787 podman[245043]: 2026-01-31 10:22:26.58053528 +0000 UTC m=+0.130249375 container attach f678baf5e0292e08ea7e4cc0c40374fad9bc18a42dd785245f76f53c556a0935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:22:26 np0005603787 trusting_wing[245059]: 167 167
Jan 31 05:22:26 np0005603787 systemd[1]: libpod-f678baf5e0292e08ea7e4cc0c40374fad9bc18a42dd785245f76f53c556a0935.scope: Deactivated successfully.
Jan 31 05:22:26 np0005603787 conmon[245059]: conmon f678baf5e0292e08ea7e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f678baf5e0292e08ea7e4cc0c40374fad9bc18a42dd785245f76f53c556a0935.scope/container/memory.events
Jan 31 05:22:26 np0005603787 podman[245043]: 2026-01-31 10:22:26.58309026 +0000 UTC m=+0.132804255 container died f678baf5e0292e08ea7e4cc0c40374fad9bc18a42dd785245f76f53c556a0935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_wing, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:22:26 np0005603787 systemd[1]: var-lib-containers-storage-overlay-d8e87806717b5865d954b1b3b432aa362b7585b249051ffd5d2d81137c50c495-merged.mount: Deactivated successfully.
Jan 31 05:22:26 np0005603787 podman[245043]: 2026-01-31 10:22:26.617208878 +0000 UTC m=+0.166922853 container remove f678baf5e0292e08ea7e4cc0c40374fad9bc18a42dd785245f76f53c556a0935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_wing, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 05:22:26 np0005603787 systemd[1]: libpod-conmon-f678baf5e0292e08ea7e4cc0c40374fad9bc18a42dd785245f76f53c556a0935.scope: Deactivated successfully.
Jan 31 05:22:26 np0005603787 podman[245084]: 2026-01-31 10:22:26.752944922 +0000 UTC m=+0.045631914 container create 03fd08e9c5b57a7ffc9588dea5b27cce956a52f049b69bac714d3e91b4423eb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_blackwell, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 05:22:26 np0005603787 systemd[1]: Started libpod-conmon-03fd08e9c5b57a7ffc9588dea5b27cce956a52f049b69bac714d3e91b4423eb9.scope.
Jan 31 05:22:26 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:22:26 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49e8e91966b6070a35f9240538cb9ad3f8af81235da60350fccac1cf7d4d6fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:22:26 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49e8e91966b6070a35f9240538cb9ad3f8af81235da60350fccac1cf7d4d6fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:22:26 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49e8e91966b6070a35f9240538cb9ad3f8af81235da60350fccac1cf7d4d6fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:22:26 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49e8e91966b6070a35f9240538cb9ad3f8af81235da60350fccac1cf7d4d6fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:22:26 np0005603787 podman[245084]: 2026-01-31 10:22:26.734833879 +0000 UTC m=+0.027520911 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:22:26 np0005603787 podman[245084]: 2026-01-31 10:22:26.839330843 +0000 UTC m=+0.132017865 container init 03fd08e9c5b57a7ffc9588dea5b27cce956a52f049b69bac714d3e91b4423eb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 05:22:26 np0005603787 podman[245084]: 2026-01-31 10:22:26.846427806 +0000 UTC m=+0.139114798 container start 03fd08e9c5b57a7ffc9588dea5b27cce956a52f049b69bac714d3e91b4423eb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_blackwell, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:22:26 np0005603787 podman[245084]: 2026-01-31 10:22:26.849793117 +0000 UTC m=+0.142480149 container attach 03fd08e9c5b57a7ffc9588dea5b27cce956a52f049b69bac714d3e91b4423eb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]: {
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:    "0": [
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:        {
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "devices": [
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "/dev/loop3"
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            ],
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "lv_name": "ceph_lv0",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "lv_size": "21470642176",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "name": "ceph_lv0",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "tags": {
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.cluster_name": "ceph",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.crush_device_class": "",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.encrypted": "0",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.objectstore": "bluestore",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.osd_id": "0",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.type": "block",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.vdo": "0",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.with_tpm": "0"
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            },
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "type": "block",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "vg_name": "ceph_vg0"
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:        }
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:    ],
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:    "1": [
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:        {
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "devices": [
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "/dev/loop4"
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            ],
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "lv_name": "ceph_lv1",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "lv_size": "21470642176",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "name": "ceph_lv1",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "tags": {
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.cluster_name": "ceph",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.crush_device_class": "",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.encrypted": "0",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.objectstore": "bluestore",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.osd_id": "1",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.type": "block",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.vdo": "0",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.with_tpm": "0"
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            },
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "type": "block",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "vg_name": "ceph_vg1"
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:        }
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:    ],
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:    "2": [
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:        {
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "devices": [
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "/dev/loop5"
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            ],
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "lv_name": "ceph_lv2",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "lv_size": "21470642176",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "name": "ceph_lv2",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "tags": {
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.cluster_name": "ceph",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.crush_device_class": "",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.encrypted": "0",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.objectstore": "bluestore",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.osd_id": "2",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.type": "block",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.vdo": "0",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:                "ceph.with_tpm": "0"
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            },
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "type": "block",
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:            "vg_name": "ceph_vg2"
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:        }
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]:    ]
Jan 31 05:22:27 np0005603787 awesome_blackwell[245100]: }
Jan 31 05:22:27 np0005603787 systemd[1]: libpod-03fd08e9c5b57a7ffc9588dea5b27cce956a52f049b69bac714d3e91b4423eb9.scope: Deactivated successfully.
Jan 31 05:22:27 np0005603787 podman[245084]: 2026-01-31 10:22:27.148366711 +0000 UTC m=+0.441053703 container died 03fd08e9c5b57a7ffc9588dea5b27cce956a52f049b69bac714d3e91b4423eb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_blackwell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 05:22:27 np0005603787 systemd[1]: var-lib-containers-storage-overlay-b49e8e91966b6070a35f9240538cb9ad3f8af81235da60350fccac1cf7d4d6fc-merged.mount: Deactivated successfully.
Jan 31 05:22:27 np0005603787 podman[245084]: 2026-01-31 10:22:27.191962738 +0000 UTC m=+0.484649730 container remove 03fd08e9c5b57a7ffc9588dea5b27cce956a52f049b69bac714d3e91b4423eb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:22:27 np0005603787 systemd[1]: libpod-conmon-03fd08e9c5b57a7ffc9588dea5b27cce956a52f049b69bac714d3e91b4423eb9.scope: Deactivated successfully.
Jan 31 05:22:27 np0005603787 podman[245117]: 2026-01-31 10:22:27.23172596 +0000 UTC m=+0.053991131 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Jan 31 05:22:27 np0005603787 podman[245109]: 2026-01-31 10:22:27.278846832 +0000 UTC m=+0.102388198 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 05:22:27 np0005603787 podman[245222]: 2026-01-31 10:22:27.558629364 +0000 UTC m=+0.035135937 container create 0945eff773ad34e7541163722bd13e20fb7a9e5577a746916f7d0d2f1510918b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_heisenberg, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 31 05:22:27 np0005603787 systemd[1]: Started libpod-conmon-0945eff773ad34e7541163722bd13e20fb7a9e5577a746916f7d0d2f1510918b.scope.
Jan 31 05:22:27 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:22:27 np0005603787 podman[245222]: 2026-01-31 10:22:27.620854658 +0000 UTC m=+0.097361281 container init 0945eff773ad34e7541163722bd13e20fb7a9e5577a746916f7d0d2f1510918b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_heisenberg, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 05:22:27 np0005603787 podman[245222]: 2026-01-31 10:22:27.626494011 +0000 UTC m=+0.103000584 container start 0945eff773ad34e7541163722bd13e20fb7a9e5577a746916f7d0d2f1510918b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_heisenberg, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:22:27 np0005603787 podman[245222]: 2026-01-31 10:22:27.629984276 +0000 UTC m=+0.106490889 container attach 0945eff773ad34e7541163722bd13e20fb7a9e5577a746916f7d0d2f1510918b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 31 05:22:27 np0005603787 adoring_heisenberg[245238]: 167 167
Jan 31 05:22:27 np0005603787 systemd[1]: libpod-0945eff773ad34e7541163722bd13e20fb7a9e5577a746916f7d0d2f1510918b.scope: Deactivated successfully.
Jan 31 05:22:27 np0005603787 podman[245222]: 2026-01-31 10:22:27.6308557 +0000 UTC m=+0.107362273 container died 0945eff773ad34e7541163722bd13e20fb7a9e5577a746916f7d0d2f1510918b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 05:22:27 np0005603787 podman[245222]: 2026-01-31 10:22:27.54158073 +0000 UTC m=+0.018087353 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:22:27 np0005603787 systemd[1]: var-lib-containers-storage-overlay-6303b32f7752ed29e510d6903c6a759b1e8db6c38c0df42c0737c9b5958c63a7-merged.mount: Deactivated successfully.
Jan 31 05:22:27 np0005603787 podman[245222]: 2026-01-31 10:22:27.662495291 +0000 UTC m=+0.139001864 container remove 0945eff773ad34e7541163722bd13e20fb7a9e5577a746916f7d0d2f1510918b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:22:27 np0005603787 systemd[1]: libpod-conmon-0945eff773ad34e7541163722bd13e20fb7a9e5577a746916f7d0d2f1510918b.scope: Deactivated successfully.
Jan 31 05:22:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:27 np0005603787 podman[245262]: 2026-01-31 10:22:27.801244956 +0000 UTC m=+0.054732330 container create 57256a2464dde918f0db61b7a3689b652c0b5e45ceb400f816379c7730538022 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_knuth, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 05:22:27 np0005603787 systemd[1]: Started libpod-conmon-57256a2464dde918f0db61b7a3689b652c0b5e45ceb400f816379c7730538022.scope.
Jan 31 05:22:27 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:22:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68827a1f139ca665ec23bc57599ece12f9e70649e48971c451ecac6a6c0a32fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:22:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68827a1f139ca665ec23bc57599ece12f9e70649e48971c451ecac6a6c0a32fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:22:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68827a1f139ca665ec23bc57599ece12f9e70649e48971c451ecac6a6c0a32fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:22:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68827a1f139ca665ec23bc57599ece12f9e70649e48971c451ecac6a6c0a32fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:22:27 np0005603787 podman[245262]: 2026-01-31 10:22:27.778134647 +0000 UTC m=+0.031622081 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:22:27 np0005603787 podman[245262]: 2026-01-31 10:22:27.879715382 +0000 UTC m=+0.133202766 container init 57256a2464dde918f0db61b7a3689b652c0b5e45ceb400f816379c7730538022 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_knuth, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 05:22:27 np0005603787 podman[245262]: 2026-01-31 10:22:27.884152062 +0000 UTC m=+0.137639426 container start 57256a2464dde918f0db61b7a3689b652c0b5e45ceb400f816379c7730538022 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:22:27 np0005603787 podman[245262]: 2026-01-31 10:22:27.896224931 +0000 UTC m=+0.149712335 container attach 57256a2464dde918f0db61b7a3689b652c0b5e45ceb400f816379c7730538022 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 31 05:22:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:22:28 np0005603787 lvm[245357]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:22:28 np0005603787 lvm[245357]: VG ceph_vg1 finished
Jan 31 05:22:28 np0005603787 lvm[245354]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:22:28 np0005603787 lvm[245354]: VG ceph_vg0 finished
Jan 31 05:22:28 np0005603787 lvm[245359]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:22:28 np0005603787 lvm[245359]: VG ceph_vg2 finished
Jan 31 05:22:28 np0005603787 lvm[245360]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:22:28 np0005603787 lvm[245360]: VG ceph_vg1 finished
Jan 31 05:22:28 np0005603787 heuristic_knuth[245278]: {}
Jan 31 05:22:28 np0005603787 systemd[1]: libpod-57256a2464dde918f0db61b7a3689b652c0b5e45ceb400f816379c7730538022.scope: Deactivated successfully.
Jan 31 05:22:28 np0005603787 podman[245262]: 2026-01-31 10:22:28.605215452 +0000 UTC m=+0.858702826 container died 57256a2464dde918f0db61b7a3689b652c0b5e45ceb400f816379c7730538022 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_knuth, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 05:22:28 np0005603787 systemd[1]: var-lib-containers-storage-overlay-68827a1f139ca665ec23bc57599ece12f9e70649e48971c451ecac6a6c0a32fc-merged.mount: Deactivated successfully.
Jan 31 05:22:28 np0005603787 podman[245262]: 2026-01-31 10:22:28.644336957 +0000 UTC m=+0.897824331 container remove 57256a2464dde918f0db61b7a3689b652c0b5e45ceb400f816379c7730538022 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:22:28 np0005603787 systemd[1]: libpod-conmon-57256a2464dde918f0db61b7a3689b652c0b5e45ceb400f816379c7730538022.scope: Deactivated successfully.
Jan 31 05:22:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:22:28 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:22:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:22:28 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:22:29 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:22:29 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:22:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:22:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:22:37.063 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:22:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:22:37.064 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:22:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:22:37.064 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:22:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:22:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:22:43
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.meta', 'images', 'volumes', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr']
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:22:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:22:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:22:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:22:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.8805523531010136e-07 of space, bias 1.0, pg target 5.641657059303041e-05 quantized to 32 (current 32)
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1278967097563662e-06 of space, bias 4.0, pg target 0.0013534760517076394 quantized to 16 (current 16)
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:22:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:22:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:22:57 np0005603787 podman[245401]: 2026-01-31 10:22:57.877014429 +0000 UTC m=+0.089260486 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 05:22:57 np0005603787 podman[245400]: 2026-01-31 10:22:57.916875357 +0000 UTC m=+0.130517122 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:22:58.333350) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854978333395, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2062, "num_deletes": 253, "total_data_size": 3506091, "memory_usage": 3559544, "flush_reason": "Manual Compaction"}
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854978356800, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3429048, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16452, "largest_seqno": 18513, "table_properties": {"data_size": 3419688, "index_size": 5918, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18682, "raw_average_key_size": 19, "raw_value_size": 3400946, "raw_average_value_size": 3633, "num_data_blocks": 268, "num_entries": 936, "num_filter_entries": 936, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769854753, "oldest_key_time": 1769854753, "file_creation_time": 1769854978, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 23523 microseconds, and 8499 cpu microseconds.
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:22:58.356867) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3429048 bytes OK
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:22:58.356891) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:22:58.359502) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:22:58.359525) EVENT_LOG_v1 {"time_micros": 1769854978359518, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:22:58.359549) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3497464, prev total WAL file size 3497464, number of live WAL files 2.
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:22:58.360431) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3348KB)], [38(7783KB)]
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854978360465, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11399846, "oldest_snapshot_seqno": -1}
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4500 keys, 9610346 bytes, temperature: kUnknown
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854978408902, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9610346, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9576427, "index_size": 21585, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 108792, "raw_average_key_size": 24, "raw_value_size": 9491318, "raw_average_value_size": 2109, "num_data_blocks": 914, "num_entries": 4500, "num_filter_entries": 4500, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853439, "oldest_key_time": 0, "file_creation_time": 1769854978, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:22:58.409185) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9610346 bytes
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:22:58.410902) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.0 rd, 198.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.6 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5021, records dropped: 521 output_compression: NoCompression
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:22:58.410924) EVENT_LOG_v1 {"time_micros": 1769854978410913, "job": 18, "event": "compaction_finished", "compaction_time_micros": 48512, "compaction_time_cpu_micros": 18495, "output_level": 6, "num_output_files": 1, "total_output_size": 9610346, "num_input_records": 5021, "num_output_records": 4500, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854978411372, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854978411874, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:22:58.360393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:22:58.411900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:22:58.411904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:22:58.411905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:22:58.411907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:22:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:22:58.411909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:22:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:01 np0005603787 nova_compute[238603]: 2026-01-31 10:23:01.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:23:01 np0005603787 nova_compute[238603]: 2026-01-31 10:23:01.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:23:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:23:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.098 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.101 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.102 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.102 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.124 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.124 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.153 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.153 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.154 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.154 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.154 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:23:04 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:23:04 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1377304771' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.708 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.882 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.883 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5097MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.883 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.884 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.961 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.961 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:23:04 np0005603787 nova_compute[238603]: 2026-01-31 10:23:04.984 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:23:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:23:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3014842847' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:23:05 np0005603787 nova_compute[238603]: 2026-01-31 10:23:05.576 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:23:05 np0005603787 nova_compute[238603]: 2026-01-31 10:23:05.580 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:23:05 np0005603787 nova_compute[238603]: 2026-01-31 10:23:05.599 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:23:05 np0005603787 nova_compute[238603]: 2026-01-31 10:23:05.600 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:23:05 np0005603787 nova_compute[238603]: 2026-01-31 10:23:05.600 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:23:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:06 np0005603787 nova_compute[238603]: 2026-01-31 10:23:06.580 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:23:06 np0005603787 nova_compute[238603]: 2026-01-31 10:23:06.581 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:23:06 np0005603787 nova_compute[238603]: 2026-01-31 10:23:06.581 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:23:06 np0005603787 nova_compute[238603]: 2026-01-31 10:23:06.581 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:23:07 np0005603787 nova_compute[238603]: 2026-01-31 10:23:07.104 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:23:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:23:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:23:13.051890) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854993051956, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 361, "num_deletes": 250, "total_data_size": 212486, "memory_usage": 219040, "flush_reason": "Manual Compaction"}
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854993054884, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 197134, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18514, "largest_seqno": 18874, "table_properties": {"data_size": 194940, "index_size": 358, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5722, "raw_average_key_size": 19, "raw_value_size": 190602, "raw_average_value_size": 643, "num_data_blocks": 16, "num_entries": 296, "num_filter_entries": 296, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769854979, "oldest_key_time": 1769854979, "file_creation_time": 1769854993, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 3022 microseconds, and 1238 cpu microseconds.
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:23:13.054918) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 197134 bytes OK
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:23:13.054931) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:23:13.056285) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:23:13.056300) EVENT_LOG_v1 {"time_micros": 1769854993056296, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:23:13.056313) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 210093, prev total WAL file size 210093, number of live WAL files 2.
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:23:13.056726) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(192KB)], [41(9385KB)]
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854993056789, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9807480, "oldest_snapshot_seqno": -1}
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4290 keys, 6489237 bytes, temperature: kUnknown
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854993096406, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6489237, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6461215, "index_size": 16201, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 104848, "raw_average_key_size": 24, "raw_value_size": 6384223, "raw_average_value_size": 1488, "num_data_blocks": 680, "num_entries": 4290, "num_filter_entries": 4290, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853439, "oldest_key_time": 0, "file_creation_time": 1769854993, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:23:13.096668) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6489237 bytes
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:23:13.098185) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 247.1 rd, 163.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.2 +0.0 blob) out(6.2 +0.0 blob), read-write-amplify(82.7) write-amplify(32.9) OK, records in: 4796, records dropped: 506 output_compression: NoCompression
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:23:13.098213) EVENT_LOG_v1 {"time_micros": 1769854993098200, "job": 20, "event": "compaction_finished", "compaction_time_micros": 39687, "compaction_time_cpu_micros": 13296, "output_level": 6, "num_output_files": 1, "total_output_size": 6489237, "num_input_records": 4796, "num_output_records": 4290, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854993098380, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769854993099549, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:23:13.056667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:23:13.099638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:23:13.099644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:23:13.099646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:23:13.099647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:23:13 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:23:13.099649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:23:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:23:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:23:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:23:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:23:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:23:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:23:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:23:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:23:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2559249849' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:23:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:23:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2559249849' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:23:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:23:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:23:28 np0005603787 podman[245491]: 2026-01-31 10:23:28.824897672 +0000 UTC m=+0.046989113 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 05:23:28 np0005603787 podman[245490]: 2026-01-31 10:23:28.852276649 +0000 UTC m=+0.074124633 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Jan 31 05:23:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:23:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:23:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:23:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:23:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:23:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:23:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:23:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:23:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:23:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:23:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:23:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:23:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:29 np0005603787 podman[245674]: 2026-01-31 10:23:29.761612544 +0000 UTC m=+0.100670547 container create 57361c7b593883f0db038793458e0c10f295debbc25b3704e583e8ec19b409ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 05:23:29 np0005603787 podman[245674]: 2026-01-31 10:23:29.691684986 +0000 UTC m=+0.030743039 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:23:29 np0005603787 systemd[1]: Started libpod-conmon-57361c7b593883f0db038793458e0c10f295debbc25b3704e583e8ec19b409ba.scope.
Jan 31 05:23:29 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:23:29 np0005603787 podman[245674]: 2026-01-31 10:23:29.854257841 +0000 UTC m=+0.193315854 container init 57361c7b593883f0db038793458e0c10f295debbc25b3704e583e8ec19b409ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_ellis, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 05:23:29 np0005603787 podman[245674]: 2026-01-31 10:23:29.860062309 +0000 UTC m=+0.199120342 container start 57361c7b593883f0db038793458e0c10f295debbc25b3704e583e8ec19b409ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_ellis, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:23:29 np0005603787 nifty_ellis[245690]: 167 167
Jan 31 05:23:29 np0005603787 systemd[1]: libpod-57361c7b593883f0db038793458e0c10f295debbc25b3704e583e8ec19b409ba.scope: Deactivated successfully.
Jan 31 05:23:29 np0005603787 podman[245674]: 2026-01-31 10:23:29.867183633 +0000 UTC m=+0.206241686 container attach 57361c7b593883f0db038793458e0c10f295debbc25b3704e583e8ec19b409ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_ellis, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:23:29 np0005603787 podman[245674]: 2026-01-31 10:23:29.867973344 +0000 UTC m=+0.207031377 container died 57361c7b593883f0db038793458e0c10f295debbc25b3704e583e8ec19b409ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_ellis, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 05:23:29 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:23:29 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:23:29 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:23:29 np0005603787 systemd[1]: var-lib-containers-storage-overlay-32560d874e1e1867e36f3891bba0a9037c5364664102cdc5947128bb53078873-merged.mount: Deactivated successfully.
Jan 31 05:23:30 np0005603787 podman[245674]: 2026-01-31 10:23:30.144052876 +0000 UTC m=+0.483110889 container remove 57361c7b593883f0db038793458e0c10f295debbc25b3704e583e8ec19b409ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_ellis, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:23:30 np0005603787 systemd[1]: libpod-conmon-57361c7b593883f0db038793458e0c10f295debbc25b3704e583e8ec19b409ba.scope: Deactivated successfully.
Jan 31 05:23:30 np0005603787 podman[245716]: 2026-01-31 10:23:30.275847151 +0000 UTC m=+0.021277581 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:23:30 np0005603787 podman[245716]: 2026-01-31 10:23:30.383989601 +0000 UTC m=+0.129420021 container create 2fbce8066046993bab63f63d863301ec79ae92598474bb8501f5bc7b601ef54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 05:23:30 np0005603787 systemd[1]: Started libpod-conmon-2fbce8066046993bab63f63d863301ec79ae92598474bb8501f5bc7b601ef54f.scope.
Jan 31 05:23:30 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:23:30 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97d771dfd0333b9b7e33b91cf56f29b4952742eae40df75befcb33b8ac19cfad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:23:30 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97d771dfd0333b9b7e33b91cf56f29b4952742eae40df75befcb33b8ac19cfad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:23:30 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97d771dfd0333b9b7e33b91cf56f29b4952742eae40df75befcb33b8ac19cfad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:23:30 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97d771dfd0333b9b7e33b91cf56f29b4952742eae40df75befcb33b8ac19cfad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:23:30 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97d771dfd0333b9b7e33b91cf56f29b4952742eae40df75befcb33b8ac19cfad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:23:30 np0005603787 podman[245716]: 2026-01-31 10:23:30.581962881 +0000 UTC m=+0.327393361 container init 2fbce8066046993bab63f63d863301ec79ae92598474bb8501f5bc7b601ef54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lovelace, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 05:23:30 np0005603787 podman[245716]: 2026-01-31 10:23:30.59696264 +0000 UTC m=+0.342393070 container start 2fbce8066046993bab63f63d863301ec79ae92598474bb8501f5bc7b601ef54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lovelace, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:23:30 np0005603787 podman[245716]: 2026-01-31 10:23:30.762513166 +0000 UTC m=+0.507943606 container attach 2fbce8066046993bab63f63d863301ec79ae92598474bb8501f5bc7b601ef54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:23:30 np0005603787 blissful_lovelace[245732]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:23:30 np0005603787 blissful_lovelace[245732]: --> All data devices are unavailable
Jan 31 05:23:31 np0005603787 systemd[1]: libpod-2fbce8066046993bab63f63d863301ec79ae92598474bb8501f5bc7b601ef54f.scope: Deactivated successfully.
Jan 31 05:23:31 np0005603787 podman[245716]: 2026-01-31 10:23:31.006470661 +0000 UTC m=+0.751901051 container died 2fbce8066046993bab63f63d863301ec79ae92598474bb8501f5bc7b601ef54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lovelace, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 05:23:31 np0005603787 systemd[1]: var-lib-containers-storage-overlay-97d771dfd0333b9b7e33b91cf56f29b4952742eae40df75befcb33b8ac19cfad-merged.mount: Deactivated successfully.
Jan 31 05:23:31 np0005603787 podman[245716]: 2026-01-31 10:23:31.16260441 +0000 UTC m=+0.908034810 container remove 2fbce8066046993bab63f63d863301ec79ae92598474bb8501f5bc7b601ef54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lovelace, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:23:31 np0005603787 systemd[1]: libpod-conmon-2fbce8066046993bab63f63d863301ec79ae92598474bb8501f5bc7b601ef54f.scope: Deactivated successfully.
Jan 31 05:23:31 np0005603787 podman[245826]: 2026-01-31 10:23:31.564863082 +0000 UTC m=+0.039285002 container create acfc0988308483023dbcf751030fc6a5141ddc5281f679e8e44416c62c4c930c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_dewdney, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:23:31 np0005603787 systemd[1]: Started libpod-conmon-acfc0988308483023dbcf751030fc6a5141ddc5281f679e8e44416c62c4c930c.scope.
Jan 31 05:23:31 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:23:31 np0005603787 podman[245826]: 2026-01-31 10:23:31.546821331 +0000 UTC m=+0.021243271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:23:31 np0005603787 podman[245826]: 2026-01-31 10:23:31.646179521 +0000 UTC m=+0.120601491 container init acfc0988308483023dbcf751030fc6a5141ddc5281f679e8e44416c62c4c930c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 05:23:31 np0005603787 podman[245826]: 2026-01-31 10:23:31.65275461 +0000 UTC m=+0.127176550 container start acfc0988308483023dbcf751030fc6a5141ddc5281f679e8e44416c62c4c930c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_dewdney, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:23:31 np0005603787 gracious_dewdney[245842]: 167 167
Jan 31 05:23:31 np0005603787 systemd[1]: libpod-acfc0988308483023dbcf751030fc6a5141ddc5281f679e8e44416c62c4c930c.scope: Deactivated successfully.
Jan 31 05:23:31 np0005603787 podman[245826]: 2026-01-31 10:23:31.659802942 +0000 UTC m=+0.134224862 container attach acfc0988308483023dbcf751030fc6a5141ddc5281f679e8e44416c62c4c930c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 05:23:31 np0005603787 podman[245826]: 2026-01-31 10:23:31.660214334 +0000 UTC m=+0.134636264 container died acfc0988308483023dbcf751030fc6a5141ddc5281f679e8e44416c62c4c930c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_dewdney, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:23:31 np0005603787 systemd[1]: var-lib-containers-storage-overlay-6166eac186b5f65172f0e6f9d05251795e700e56ec70db939c1135188fdfe970-merged.mount: Deactivated successfully.
Jan 31 05:23:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:31 np0005603787 podman[245826]: 2026-01-31 10:23:31.709284003 +0000 UTC m=+0.183705933 container remove acfc0988308483023dbcf751030fc6a5141ddc5281f679e8e44416c62c4c930c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:23:31 np0005603787 systemd[1]: libpod-conmon-acfc0988308483023dbcf751030fc6a5141ddc5281f679e8e44416c62c4c930c.scope: Deactivated successfully.
Jan 31 05:23:31 np0005603787 podman[245869]: 2026-01-31 10:23:31.828640488 +0000 UTC m=+0.042573403 container create 83913f96ebf5b7eef4f4d891ff376185c97379daac155da36b42a92b078a7eb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 05:23:31 np0005603787 systemd[1]: Started libpod-conmon-83913f96ebf5b7eef4f4d891ff376185c97379daac155da36b42a92b078a7eb1.scope.
Jan 31 05:23:31 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:23:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40049f0a1e642dbc0c038170fb8ccc19e5de5a5d51b5294884ed3a4ddbc42e1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:23:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40049f0a1e642dbc0c038170fb8ccc19e5de5a5d51b5294884ed3a4ddbc42e1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:23:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40049f0a1e642dbc0c038170fb8ccc19e5de5a5d51b5294884ed3a4ddbc42e1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:23:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40049f0a1e642dbc0c038170fb8ccc19e5de5a5d51b5294884ed3a4ddbc42e1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:23:31 np0005603787 podman[245869]: 2026-01-31 10:23:31.894057753 +0000 UTC m=+0.107990678 container init 83913f96ebf5b7eef4f4d891ff376185c97379daac155da36b42a92b078a7eb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:23:31 np0005603787 podman[245869]: 2026-01-31 10:23:31.899407038 +0000 UTC m=+0.113339953 container start 83913f96ebf5b7eef4f4d891ff376185c97379daac155da36b42a92b078a7eb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 05:23:31 np0005603787 podman[245869]: 2026-01-31 10:23:31.807501142 +0000 UTC m=+0.021434087 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:23:31 np0005603787 podman[245869]: 2026-01-31 10:23:31.903193112 +0000 UTC m=+0.117126027 container attach 83913f96ebf5b7eef4f4d891ff376185c97379daac155da36b42a92b078a7eb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_williamson, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 05:23:32 np0005603787 great_williamson[245885]: {
Jan 31 05:23:32 np0005603787 great_williamson[245885]:    "0": [
Jan 31 05:23:32 np0005603787 great_williamson[245885]:        {
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "devices": [
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "/dev/loop3"
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            ],
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "lv_name": "ceph_lv0",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "lv_size": "21470642176",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "name": "ceph_lv0",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "tags": {
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.cluster_name": "ceph",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.crush_device_class": "",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.encrypted": "0",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.objectstore": "bluestore",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.osd_id": "0",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.type": "block",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.vdo": "0",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.with_tpm": "0"
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            },
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "type": "block",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "vg_name": "ceph_vg0"
Jan 31 05:23:32 np0005603787 great_williamson[245885]:        }
Jan 31 05:23:32 np0005603787 great_williamson[245885]:    ],
Jan 31 05:23:32 np0005603787 great_williamson[245885]:    "1": [
Jan 31 05:23:32 np0005603787 great_williamson[245885]:        {
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "devices": [
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "/dev/loop4"
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            ],
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "lv_name": "ceph_lv1",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "lv_size": "21470642176",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "name": "ceph_lv1",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "tags": {
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.cluster_name": "ceph",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.crush_device_class": "",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.encrypted": "0",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.objectstore": "bluestore",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.osd_id": "1",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.type": "block",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.vdo": "0",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.with_tpm": "0"
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            },
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "type": "block",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "vg_name": "ceph_vg1"
Jan 31 05:23:32 np0005603787 great_williamson[245885]:        }
Jan 31 05:23:32 np0005603787 great_williamson[245885]:    ],
Jan 31 05:23:32 np0005603787 great_williamson[245885]:    "2": [
Jan 31 05:23:32 np0005603787 great_williamson[245885]:        {
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "devices": [
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "/dev/loop5"
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            ],
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "lv_name": "ceph_lv2",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "lv_size": "21470642176",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "name": "ceph_lv2",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "tags": {
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.cluster_name": "ceph",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.crush_device_class": "",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.encrypted": "0",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.objectstore": "bluestore",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.osd_id": "2",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.type": "block",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.vdo": "0",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:                "ceph.with_tpm": "0"
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            },
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "type": "block",
Jan 31 05:23:32 np0005603787 great_williamson[245885]:            "vg_name": "ceph_vg2"
Jan 31 05:23:32 np0005603787 great_williamson[245885]:        }
Jan 31 05:23:32 np0005603787 great_williamson[245885]:    ]
Jan 31 05:23:32 np0005603787 great_williamson[245885]: }
Jan 31 05:23:32 np0005603787 systemd[1]: libpod-83913f96ebf5b7eef4f4d891ff376185c97379daac155da36b42a92b078a7eb1.scope: Deactivated successfully.
Jan 31 05:23:32 np0005603787 podman[245869]: 2026-01-31 10:23:32.190322364 +0000 UTC m=+0.404255279 container died 83913f96ebf5b7eef4f4d891ff376185c97379daac155da36b42a92b078a7eb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_williamson, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:23:32 np0005603787 systemd[1]: var-lib-containers-storage-overlay-40049f0a1e642dbc0c038170fb8ccc19e5de5a5d51b5294884ed3a4ddbc42e1f-merged.mount: Deactivated successfully.
Jan 31 05:23:32 np0005603787 podman[245869]: 2026-01-31 10:23:32.233963844 +0000 UTC m=+0.447896789 container remove 83913f96ebf5b7eef4f4d891ff376185c97379daac155da36b42a92b078a7eb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:23:32 np0005603787 systemd[1]: libpod-conmon-83913f96ebf5b7eef4f4d891ff376185c97379daac155da36b42a92b078a7eb1.scope: Deactivated successfully.
Jan 31 05:23:32 np0005603787 podman[245969]: 2026-01-31 10:23:32.611351849 +0000 UTC m=+0.040025093 container create 17d53687147a3cfcecd8c7814a93227cdbfca99bb1eb0161a65704908864499d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 05:23:32 np0005603787 systemd[1]: Started libpod-conmon-17d53687147a3cfcecd8c7814a93227cdbfca99bb1eb0161a65704908864499d.scope.
Jan 31 05:23:32 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:23:32 np0005603787 podman[245969]: 2026-01-31 10:23:32.588877826 +0000 UTC m=+0.017551080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:23:32 np0005603787 podman[245969]: 2026-01-31 10:23:32.68836288 +0000 UTC m=+0.117036154 container init 17d53687147a3cfcecd8c7814a93227cdbfca99bb1eb0161a65704908864499d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 05:23:32 np0005603787 podman[245969]: 2026-01-31 10:23:32.69571507 +0000 UTC m=+0.124388304 container start 17d53687147a3cfcecd8c7814a93227cdbfca99bb1eb0161a65704908864499d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 05:23:32 np0005603787 wonderful_engelbart[245985]: 167 167
Jan 31 05:23:32 np0005603787 systemd[1]: libpod-17d53687147a3cfcecd8c7814a93227cdbfca99bb1eb0161a65704908864499d.scope: Deactivated successfully.
Jan 31 05:23:32 np0005603787 podman[245969]: 2026-01-31 10:23:32.704105629 +0000 UTC m=+0.132778943 container attach 17d53687147a3cfcecd8c7814a93227cdbfca99bb1eb0161a65704908864499d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_engelbart, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 05:23:32 np0005603787 podman[245969]: 2026-01-31 10:23:32.705055085 +0000 UTC m=+0.133728339 container died 17d53687147a3cfcecd8c7814a93227cdbfca99bb1eb0161a65704908864499d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_engelbart, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 05:23:32 np0005603787 systemd[1]: var-lib-containers-storage-overlay-01fc45c36035c9fc4ef602c646bb706278a2ed81f8781597d4603800d1529aac-merged.mount: Deactivated successfully.
Jan 31 05:23:32 np0005603787 podman[245969]: 2026-01-31 10:23:32.772920286 +0000 UTC m=+0.201593510 container remove 17d53687147a3cfcecd8c7814a93227cdbfca99bb1eb0161a65704908864499d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_engelbart, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:23:32 np0005603787 systemd[1]: libpod-conmon-17d53687147a3cfcecd8c7814a93227cdbfca99bb1eb0161a65704908864499d.scope: Deactivated successfully.
Jan 31 05:23:32 np0005603787 podman[246009]: 2026-01-31 10:23:32.943447918 +0000 UTC m=+0.048343670 container create ed963b1f1e9b4661ab790785885628e5a864d4d0fefdc32240f145070308fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 05:23:32 np0005603787 systemd[1]: Started libpod-conmon-ed963b1f1e9b4661ab790785885628e5a864d4d0fefdc32240f145070308fe6f.scope.
Jan 31 05:23:33 np0005603787 podman[246009]: 2026-01-31 10:23:32.91677181 +0000 UTC m=+0.021667632 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:23:33 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:23:33 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1443bd07166bc447ad5ab86c0b07ee9b49173431837a5c311e444f8611fcde8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:23:33 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1443bd07166bc447ad5ab86c0b07ee9b49173431837a5c311e444f8611fcde8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:23:33 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1443bd07166bc447ad5ab86c0b07ee9b49173431837a5c311e444f8611fcde8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:23:33 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1443bd07166bc447ad5ab86c0b07ee9b49173431837a5c311e444f8611fcde8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:23:33 np0005603787 podman[246009]: 2026-01-31 10:23:33.046962031 +0000 UTC m=+0.151857803 container init ed963b1f1e9b4661ab790785885628e5a864d4d0fefdc32240f145070308fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_pare, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 05:23:33 np0005603787 podman[246009]: 2026-01-31 10:23:33.052627646 +0000 UTC m=+0.157523418 container start ed963b1f1e9b4661ab790785885628e5a864d4d0fefdc32240f145070308fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_pare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 05:23:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:23:33 np0005603787 podman[246009]: 2026-01-31 10:23:33.062603238 +0000 UTC m=+0.167498990 container attach ed963b1f1e9b4661ab790785885628e5a864d4d0fefdc32240f145070308fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_pare, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 05:23:33 np0005603787 lvm[246105]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:23:33 np0005603787 lvm[246103]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:23:33 np0005603787 lvm[246103]: VG ceph_vg0 finished
Jan 31 05:23:33 np0005603787 lvm[246105]: VG ceph_vg1 finished
Jan 31 05:23:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:33 np0005603787 lvm[246107]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:23:33 np0005603787 lvm[246107]: VG ceph_vg2 finished
Jan 31 05:23:33 np0005603787 stoic_pare[246025]: {}
Jan 31 05:23:33 np0005603787 systemd[1]: libpod-ed963b1f1e9b4661ab790785885628e5a864d4d0fefdc32240f145070308fe6f.scope: Deactivated successfully.
Jan 31 05:23:33 np0005603787 systemd[1]: libpod-ed963b1f1e9b4661ab790785885628e5a864d4d0fefdc32240f145070308fe6f.scope: Consumed 1.064s CPU time.
Jan 31 05:23:33 np0005603787 podman[246009]: 2026-01-31 10:23:33.809651376 +0000 UTC m=+0.914547148 container died ed963b1f1e9b4661ab790785885628e5a864d4d0fefdc32240f145070308fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_pare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 05:23:33 np0005603787 systemd[1]: var-lib-containers-storage-overlay-c1443bd07166bc447ad5ab86c0b07ee9b49173431837a5c311e444f8611fcde8-merged.mount: Deactivated successfully.
Jan 31 05:23:33 np0005603787 podman[246009]: 2026-01-31 10:23:33.858331213 +0000 UTC m=+0.963226945 container remove ed963b1f1e9b4661ab790785885628e5a864d4d0fefdc32240f145070308fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_pare, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 05:23:33 np0005603787 systemd[1]: libpod-conmon-ed963b1f1e9b4661ab790785885628e5a864d4d0fefdc32240f145070308fe6f.scope: Deactivated successfully.
Jan 31 05:23:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:23:33 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:23:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:23:33 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:23:34 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:23:34 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:23:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:23:37.064 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:23:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:23:37.066 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:23:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:23:37.066 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:23:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:23:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:23:43
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'vms', 'images', 'volumes', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'default.rgw.control']
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:23:43 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:23:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:23:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:23:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.8805523531010136e-07 of space, bias 1.0, pg target 5.641657059303041e-05 quantized to 32 (current 32)
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1278967097563662e-06 of space, bias 4.0, pg target 0.0013534760517076394 quantized to 16 (current 16)
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:23:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:23:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:23:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:23:59 np0005603787 podman[246150]: 2026-01-31 10:23:59.862264638 +0000 UTC m=+0.072967752 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 05:23:59 np0005603787 podman[246149]: 2026-01-31 10:23:59.898111915 +0000 UTC m=+0.108969863 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 05:24:00 np0005603787 nova_compute[238603]: 2026-01-31 10:24:00.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:24:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:24:03 np0005603787 nova_compute[238603]: 2026-01-31 10:24:03.121 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:24:03 np0005603787 nova_compute[238603]: 2026-01-31 10:24:03.122 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:24:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:05 np0005603787 nova_compute[238603]: 2026-01-31 10:24:05.098 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:24:05 np0005603787 nova_compute[238603]: 2026-01-31 10:24:05.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:24:05 np0005603787 nova_compute[238603]: 2026-01-31 10:24:05.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:24:05 np0005603787 nova_compute[238603]: 2026-01-31 10:24:05.102 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:24:05 np0005603787 nova_compute[238603]: 2026-01-31 10:24:05.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:24:05 np0005603787 nova_compute[238603]: 2026-01-31 10:24:05.134 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:24:05 np0005603787 nova_compute[238603]: 2026-01-31 10:24:05.134 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:24:05 np0005603787 nova_compute[238603]: 2026-01-31 10:24:05.135 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:24:05 np0005603787 nova_compute[238603]: 2026-01-31 10:24:05.135 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:24:05 np0005603787 nova_compute[238603]: 2026-01-31 10:24:05.136 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:24:05 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:24:05 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3833574240' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:24:05 np0005603787 nova_compute[238603]: 2026-01-31 10:24:05.690 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:24:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:05 np0005603787 nova_compute[238603]: 2026-01-31 10:24:05.874 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:24:05 np0005603787 nova_compute[238603]: 2026-01-31 10:24:05.875 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5112MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:24:05 np0005603787 nova_compute[238603]: 2026-01-31 10:24:05.876 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:24:05 np0005603787 nova_compute[238603]: 2026-01-31 10:24:05.876 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:24:06 np0005603787 nova_compute[238603]: 2026-01-31 10:24:06.071 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:24:06 np0005603787 nova_compute[238603]: 2026-01-31 10:24:06.071 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:24:06 np0005603787 nova_compute[238603]: 2026-01-31 10:24:06.145 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Refreshing inventories for resource provider 207962d2-1ba9-4db2-8533-2a30e7131f3e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 05:24:06 np0005603787 nova_compute[238603]: 2026-01-31 10:24:06.226 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Updating ProviderTree inventory for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 05:24:06 np0005603787 nova_compute[238603]: 2026-01-31 10:24:06.226 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Updating inventory in ProviderTree for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 05:24:06 np0005603787 nova_compute[238603]: 2026-01-31 10:24:06.256 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Refreshing aggregate associations for resource provider 207962d2-1ba9-4db2-8533-2a30e7131f3e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 05:24:06 np0005603787 nova_compute[238603]: 2026-01-31 10:24:06.278 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Refreshing trait associations for resource provider 207962d2-1ba9-4db2-8533-2a30e7131f3e, traits: COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SVM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE41,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 05:24:06 np0005603787 nova_compute[238603]: 2026-01-31 10:24:06.294 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:24:06 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:24:06 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3420151008' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:24:06 np0005603787 nova_compute[238603]: 2026-01-31 10:24:06.888 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:24:06 np0005603787 nova_compute[238603]: 2026-01-31 10:24:06.892 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:24:06 np0005603787 nova_compute[238603]: 2026-01-31 10:24:06.908 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:24:06 np0005603787 nova_compute[238603]: 2026-01-31 10:24:06.911 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:24:06 np0005603787 nova_compute[238603]: 2026-01-31 10:24:06.911 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:24:07 np0005603787 nova_compute[238603]: 2026-01-31 10:24:07.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:24:07 np0005603787 nova_compute[238603]: 2026-01-31 10:24:07.102 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:24:07 np0005603787 nova_compute[238603]: 2026-01-31 10:24:07.102 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:24:07 np0005603787 nova_compute[238603]: 2026-01-31 10:24:07.114 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:24:07 np0005603787 nova_compute[238603]: 2026-01-31 10:24:07.114 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:24:07 np0005603787 nova_compute[238603]: 2026-01-31 10:24:07.115 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:24:07 np0005603787 nova_compute[238603]: 2026-01-31 10:24:07.115 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 05:24:07 np0005603787 nova_compute[238603]: 2026-01-31 10:24:07.125 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 05:24:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:24:08 np0005603787 nova_compute[238603]: 2026-01-31 10:24:08.113 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:24:08 np0005603787 nova_compute[238603]: 2026-01-31 10:24:08.128 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:24:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:11 np0005603787 nova_compute[238603]: 2026-01-31 10:24:11.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:24:11 np0005603787 nova_compute[238603]: 2026-01-31 10:24:11.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 05:24:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:24:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:24:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:24:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:24:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:24:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:24:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:24:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:17 np0005603787 nova_compute[238603]: 2026-01-31 10:24:17.159 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:24:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:24:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:24:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1608218770' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:24:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:24:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1608218770' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:24:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:24:23.068038) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855063068123, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 812, "num_deletes": 256, "total_data_size": 1057889, "memory_usage": 1084064, "flush_reason": "Manual Compaction"}
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855063079274, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1048321, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18875, "largest_seqno": 19686, "table_properties": {"data_size": 1044232, "index_size": 1805, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8663, "raw_average_key_size": 18, "raw_value_size": 1035996, "raw_average_value_size": 2190, "num_data_blocks": 82, "num_entries": 473, "num_filter_entries": 473, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769854993, "oldest_key_time": 1769854993, "file_creation_time": 1769855063, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 11310 microseconds, and 3948 cpu microseconds.
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:24:23.079343) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1048321 bytes OK
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:24:23.079372) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:24:23.081340) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:24:23.081368) EVENT_LOG_v1 {"time_micros": 1769855063081358, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:24:23.081395) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1053821, prev total WAL file size 1053821, number of live WAL files 2.
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:24:23.082252) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1023KB)], [44(6337KB)]
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855063082302, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7537558, "oldest_snapshot_seqno": -1}
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4239 keys, 7408076 bytes, temperature: kUnknown
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855063133942, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7408076, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7378881, "index_size": 17501, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 104879, "raw_average_key_size": 24, "raw_value_size": 7301270, "raw_average_value_size": 1722, "num_data_blocks": 733, "num_entries": 4239, "num_filter_entries": 4239, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853439, "oldest_key_time": 0, "file_creation_time": 1769855063, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:24:23.134248) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7408076 bytes
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:24:23.136025) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.8 rd, 143.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 6.2 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(14.3) write-amplify(7.1) OK, records in: 4763, records dropped: 524 output_compression: NoCompression
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:24:23.136045) EVENT_LOG_v1 {"time_micros": 1769855063136036, "job": 22, "event": "compaction_finished", "compaction_time_micros": 51708, "compaction_time_cpu_micros": 23998, "output_level": 6, "num_output_files": 1, "total_output_size": 7408076, "num_input_records": 4763, "num_output_records": 4239, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855063136256, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855063136939, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:24:23.082148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:24:23.137159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:24:23.137166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:24:23.137169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:24:23.137172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:24:23 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:24:23.137175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:24:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:24:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:30 np0005603787 podman[246240]: 2026-01-31 10:24:30.861889283 +0000 UTC m=+0.078696688 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Jan 31 05:24:30 np0005603787 podman[246239]: 2026-01-31 10:24:30.88195503 +0000 UTC m=+0.096014060 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller)
Jan 31 05:24:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:24:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:34 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:24:34 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:24:34 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:24:34 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:24:34 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:24:34 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:24:34 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:24:34 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:24:34 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:24:34 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:24:34 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:24:34 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:24:34 np0005603787 podman[246424]: 2026-01-31 10:24:34.927126284 +0000 UTC m=+0.054519478 container create 71ba16dcc37f1f03bfb92125bb3d6dd4d90c8bad5d9dd031e0ef570f78cc65d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 05:24:34 np0005603787 systemd[1]: Started libpod-conmon-71ba16dcc37f1f03bfb92125bb3d6dd4d90c8bad5d9dd031e0ef570f78cc65d0.scope.
Jan 31 05:24:34 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:24:35 np0005603787 podman[246424]: 2026-01-31 10:24:34.906599334 +0000 UTC m=+0.033992568 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:24:35 np0005603787 podman[246424]: 2026-01-31 10:24:35.01571044 +0000 UTC m=+0.143103634 container init 71ba16dcc37f1f03bfb92125bb3d6dd4d90c8bad5d9dd031e0ef570f78cc65d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_spence, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:24:35 np0005603787 podman[246424]: 2026-01-31 10:24:35.025395374 +0000 UTC m=+0.152788608 container start 71ba16dcc37f1f03bfb92125bb3d6dd4d90c8bad5d9dd031e0ef570f78cc65d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:24:35 np0005603787 podman[246424]: 2026-01-31 10:24:35.029028983 +0000 UTC m=+0.156422217 container attach 71ba16dcc37f1f03bfb92125bb3d6dd4d90c8bad5d9dd031e0ef570f78cc65d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:24:35 np0005603787 condescending_spence[246440]: 167 167
Jan 31 05:24:35 np0005603787 systemd[1]: libpod-71ba16dcc37f1f03bfb92125bb3d6dd4d90c8bad5d9dd031e0ef570f78cc65d0.scope: Deactivated successfully.
Jan 31 05:24:35 np0005603787 conmon[246440]: conmon 71ba16dcc37f1f03bfb9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-71ba16dcc37f1f03bfb92125bb3d6dd4d90c8bad5d9dd031e0ef570f78cc65d0.scope/container/memory.events
Jan 31 05:24:35 np0005603787 podman[246424]: 2026-01-31 10:24:35.033979489 +0000 UTC m=+0.161372703 container died 71ba16dcc37f1f03bfb92125bb3d6dd4d90c8bad5d9dd031e0ef570f78cc65d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:24:35 np0005603787 systemd[1]: var-lib-containers-storage-overlay-8fbf464633f08ad58dde1bbd4d3535d8f88c0da9fb1e694fd75e934ca7ae6022-merged.mount: Deactivated successfully.
Jan 31 05:24:35 np0005603787 podman[246424]: 2026-01-31 10:24:35.078596766 +0000 UTC m=+0.205989960 container remove 71ba16dcc37f1f03bfb92125bb3d6dd4d90c8bad5d9dd031e0ef570f78cc65d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:24:35 np0005603787 systemd[1]: libpod-conmon-71ba16dcc37f1f03bfb92125bb3d6dd4d90c8bad5d9dd031e0ef570f78cc65d0.scope: Deactivated successfully.
Jan 31 05:24:35 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:24:35 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:24:35 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:24:35 np0005603787 podman[246467]: 2026-01-31 10:24:35.258632196 +0000 UTC m=+0.054278491 container create d1b8445193f6d161da91a6c0d892dbb12b8fce2f54963cdefdacb2bec57df9f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 05:24:35 np0005603787 systemd[1]: Started libpod-conmon-d1b8445193f6d161da91a6c0d892dbb12b8fce2f54963cdefdacb2bec57df9f8.scope.
Jan 31 05:24:35 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:24:35 np0005603787 podman[246467]: 2026-01-31 10:24:35.235175157 +0000 UTC m=+0.030821502 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:24:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc512f85f8118e9d90c3966cc525e453ac8b402a8590edaae2cc0c33b49268c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:24:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc512f85f8118e9d90c3966cc525e453ac8b402a8590edaae2cc0c33b49268c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:24:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc512f85f8118e9d90c3966cc525e453ac8b402a8590edaae2cc0c33b49268c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:24:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc512f85f8118e9d90c3966cc525e453ac8b402a8590edaae2cc0c33b49268c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:24:35 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc512f85f8118e9d90c3966cc525e453ac8b402a8590edaae2cc0c33b49268c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:24:35 np0005603787 podman[246467]: 2026-01-31 10:24:35.355831188 +0000 UTC m=+0.151477463 container init d1b8445193f6d161da91a6c0d892dbb12b8fce2f54963cdefdacb2bec57df9f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:24:35 np0005603787 podman[246467]: 2026-01-31 10:24:35.365001768 +0000 UTC m=+0.160648053 container start d1b8445193f6d161da91a6c0d892dbb12b8fce2f54963cdefdacb2bec57df9f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_keldysh, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:24:35 np0005603787 podman[246467]: 2026-01-31 10:24:35.368920305 +0000 UTC m=+0.164566560 container attach d1b8445193f6d161da91a6c0d892dbb12b8fce2f54963cdefdacb2bec57df9f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 05:24:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:35 np0005603787 charming_keldysh[246483]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:24:35 np0005603787 charming_keldysh[246483]: --> All data devices are unavailable
Jan 31 05:24:35 np0005603787 systemd[1]: libpod-d1b8445193f6d161da91a6c0d892dbb12b8fce2f54963cdefdacb2bec57df9f8.scope: Deactivated successfully.
Jan 31 05:24:35 np0005603787 podman[246467]: 2026-01-31 10:24:35.821329206 +0000 UTC m=+0.616975501 container died d1b8445193f6d161da91a6c0d892dbb12b8fce2f54963cdefdacb2bec57df9f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 05:24:35 np0005603787 systemd[1]: var-lib-containers-storage-overlay-cc512f85f8118e9d90c3966cc525e453ac8b402a8590edaae2cc0c33b49268c1-merged.mount: Deactivated successfully.
Jan 31 05:24:35 np0005603787 podman[246467]: 2026-01-31 10:24:35.86399509 +0000 UTC m=+0.659641375 container remove d1b8445193f6d161da91a6c0d892dbb12b8fce2f54963cdefdacb2bec57df9f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_keldysh, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 05:24:35 np0005603787 systemd[1]: libpod-conmon-d1b8445193f6d161da91a6c0d892dbb12b8fce2f54963cdefdacb2bec57df9f8.scope: Deactivated successfully.
Jan 31 05:24:36 np0005603787 podman[246576]: 2026-01-31 10:24:36.290877164 +0000 UTC m=+0.040485775 container create 9d619bc2dcabfacc728b67edfaf02ac63d721cc18e8420cc95bbcf28853bbf78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_pike, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:24:36 np0005603787 systemd[1]: Started libpod-conmon-9d619bc2dcabfacc728b67edfaf02ac63d721cc18e8420cc95bbcf28853bbf78.scope.
Jan 31 05:24:36 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:24:36 np0005603787 podman[246576]: 2026-01-31 10:24:36.366945299 +0000 UTC m=+0.116553900 container init 9d619bc2dcabfacc728b67edfaf02ac63d721cc18e8420cc95bbcf28853bbf78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_pike, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:24:36 np0005603787 podman[246576]: 2026-01-31 10:24:36.273027137 +0000 UTC m=+0.022635728 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:24:36 np0005603787 podman[246576]: 2026-01-31 10:24:36.376721726 +0000 UTC m=+0.126330317 container start 9d619bc2dcabfacc728b67edfaf02ac63d721cc18e8420cc95bbcf28853bbf78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_pike, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 05:24:36 np0005603787 podman[246576]: 2026-01-31 10:24:36.381142396 +0000 UTC m=+0.130751017 container attach 9d619bc2dcabfacc728b67edfaf02ac63d721cc18e8420cc95bbcf28853bbf78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_pike, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:24:36 np0005603787 adoring_pike[246592]: 167 167
Jan 31 05:24:36 np0005603787 systemd[1]: libpod-9d619bc2dcabfacc728b67edfaf02ac63d721cc18e8420cc95bbcf28853bbf78.scope: Deactivated successfully.
Jan 31 05:24:36 np0005603787 podman[246576]: 2026-01-31 10:24:36.383365727 +0000 UTC m=+0.132974308 container died 9d619bc2dcabfacc728b67edfaf02ac63d721cc18e8420cc95bbcf28853bbf78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_pike, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 05:24:36 np0005603787 systemd[1]: var-lib-containers-storage-overlay-fa23689636201fa050fcf14e44a9d3b68880842fa9d9258e77e103dbaa477250-merged.mount: Deactivated successfully.
Jan 31 05:24:36 np0005603787 podman[246576]: 2026-01-31 10:24:36.421563749 +0000 UTC m=+0.171172330 container remove 9d619bc2dcabfacc728b67edfaf02ac63d721cc18e8420cc95bbcf28853bbf78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_pike, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:24:36 np0005603787 systemd[1]: libpod-conmon-9d619bc2dcabfacc728b67edfaf02ac63d721cc18e8420cc95bbcf28853bbf78.scope: Deactivated successfully.
Jan 31 05:24:36 np0005603787 podman[246617]: 2026-01-31 10:24:36.604387815 +0000 UTC m=+0.060060098 container create 76532a051748535e538dc2d6c596baff755f4e8dfd06c4566a3a8f0c8054b111 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_allen, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 05:24:36 np0005603787 systemd[1]: Started libpod-conmon-76532a051748535e538dc2d6c596baff755f4e8dfd06c4566a3a8f0c8054b111.scope.
Jan 31 05:24:36 np0005603787 podman[246617]: 2026-01-31 10:24:36.581490201 +0000 UTC m=+0.037162504 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:24:36 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:24:36 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa1a89c84c919681205fa2e86b3c1eb281e784928503184b4ee6a5b0d9b5eb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:24:36 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa1a89c84c919681205fa2e86b3c1eb281e784928503184b4ee6a5b0d9b5eb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:24:36 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa1a89c84c919681205fa2e86b3c1eb281e784928503184b4ee6a5b0d9b5eb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:24:36 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa1a89c84c919681205fa2e86b3c1eb281e784928503184b4ee6a5b0d9b5eb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:24:36 np0005603787 podman[246617]: 2026-01-31 10:24:36.715874227 +0000 UTC m=+0.171546600 container init 76532a051748535e538dc2d6c596baff755f4e8dfd06c4566a3a8f0c8054b111 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_allen, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 05:24:36 np0005603787 podman[246617]: 2026-01-31 10:24:36.729942171 +0000 UTC m=+0.185614484 container start 76532a051748535e538dc2d6c596baff755f4e8dfd06c4566a3a8f0c8054b111 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 05:24:36 np0005603787 podman[246617]: 2026-01-31 10:24:36.733598511 +0000 UTC m=+0.189270824 container attach 76532a051748535e538dc2d6c596baff755f4e8dfd06c4566a3a8f0c8054b111 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:24:36 np0005603787 great_allen[246633]: {
Jan 31 05:24:37 np0005603787 great_allen[246633]:    "0": [
Jan 31 05:24:37 np0005603787 great_allen[246633]:        {
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "devices": [
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "/dev/loop3"
Jan 31 05:24:37 np0005603787 great_allen[246633]:            ],
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "lv_name": "ceph_lv0",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "lv_size": "21470642176",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "name": "ceph_lv0",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "tags": {
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.cluster_name": "ceph",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.crush_device_class": "",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.encrypted": "0",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.objectstore": "bluestore",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.osd_id": "0",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.type": "block",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.vdo": "0",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.with_tpm": "0"
Jan 31 05:24:37 np0005603787 great_allen[246633]:            },
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "type": "block",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "vg_name": "ceph_vg0"
Jan 31 05:24:37 np0005603787 great_allen[246633]:        }
Jan 31 05:24:37 np0005603787 great_allen[246633]:    ],
Jan 31 05:24:37 np0005603787 great_allen[246633]:    "1": [
Jan 31 05:24:37 np0005603787 great_allen[246633]:        {
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "devices": [
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "/dev/loop4"
Jan 31 05:24:37 np0005603787 great_allen[246633]:            ],
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "lv_name": "ceph_lv1",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "lv_size": "21470642176",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "name": "ceph_lv1",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "tags": {
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.cluster_name": "ceph",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.crush_device_class": "",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.encrypted": "0",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.objectstore": "bluestore",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.osd_id": "1",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.type": "block",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.vdo": "0",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.with_tpm": "0"
Jan 31 05:24:37 np0005603787 great_allen[246633]:            },
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "type": "block",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "vg_name": "ceph_vg1"
Jan 31 05:24:37 np0005603787 great_allen[246633]:        }
Jan 31 05:24:37 np0005603787 great_allen[246633]:    ],
Jan 31 05:24:37 np0005603787 great_allen[246633]:    "2": [
Jan 31 05:24:37 np0005603787 great_allen[246633]:        {
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "devices": [
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "/dev/loop5"
Jan 31 05:24:37 np0005603787 great_allen[246633]:            ],
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "lv_name": "ceph_lv2",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "lv_size": "21470642176",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "name": "ceph_lv2",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "tags": {
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.cluster_name": "ceph",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.crush_device_class": "",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.encrypted": "0",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.objectstore": "bluestore",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.osd_id": "2",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.type": "block",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.vdo": "0",
Jan 31 05:24:37 np0005603787 great_allen[246633]:                "ceph.with_tpm": "0"
Jan 31 05:24:37 np0005603787 great_allen[246633]:            },
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "type": "block",
Jan 31 05:24:37 np0005603787 great_allen[246633]:            "vg_name": "ceph_vg2"
Jan 31 05:24:37 np0005603787 great_allen[246633]:        }
Jan 31 05:24:37 np0005603787 great_allen[246633]:    ]
Jan 31 05:24:37 np0005603787 great_allen[246633]: }
Jan 31 05:24:37 np0005603787 systemd[1]: libpod-76532a051748535e538dc2d6c596baff755f4e8dfd06c4566a3a8f0c8054b111.scope: Deactivated successfully.
Jan 31 05:24:37 np0005603787 podman[246617]: 2026-01-31 10:24:37.03825376 +0000 UTC m=+0.493926073 container died 76532a051748535e538dc2d6c596baff755f4e8dfd06c4566a3a8f0c8054b111 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 05:24:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:24:37.066 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:24:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:24:37.069 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:24:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:24:37.069 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:24:37 np0005603787 systemd[1]: var-lib-containers-storage-overlay-3aa1a89c84c919681205fa2e86b3c1eb281e784928503184b4ee6a5b0d9b5eb5-merged.mount: Deactivated successfully.
Jan 31 05:24:37 np0005603787 podman[246617]: 2026-01-31 10:24:37.090556478 +0000 UTC m=+0.546228781 container remove 76532a051748535e538dc2d6c596baff755f4e8dfd06c4566a3a8f0c8054b111 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_allen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 05:24:37 np0005603787 systemd[1]: libpod-conmon-76532a051748535e538dc2d6c596baff755f4e8dfd06c4566a3a8f0c8054b111.scope: Deactivated successfully.
Jan 31 05:24:37 np0005603787 podman[246714]: 2026-01-31 10:24:37.552782206 +0000 UTC m=+0.048238717 container create 9ac9752bf579374fe8f69c7ef79b7fedce251286917dcc9c51ff35b7c869e9a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:24:37 np0005603787 systemd[1]: Started libpod-conmon-9ac9752bf579374fe8f69c7ef79b7fedce251286917dcc9c51ff35b7c869e9a0.scope.
Jan 31 05:24:37 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:24:37 np0005603787 podman[246714]: 2026-01-31 10:24:37.531065343 +0000 UTC m=+0.026521864 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:24:37 np0005603787 podman[246714]: 2026-01-31 10:24:37.630418094 +0000 UTC m=+0.125874625 container init 9ac9752bf579374fe8f69c7ef79b7fedce251286917dcc9c51ff35b7c869e9a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 31 05:24:37 np0005603787 podman[246714]: 2026-01-31 10:24:37.639028338 +0000 UTC m=+0.134484839 container start 9ac9752bf579374fe8f69c7ef79b7fedce251286917dcc9c51ff35b7c869e9a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_swirles, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 05:24:37 np0005603787 thirsty_swirles[246730]: 167 167
Jan 31 05:24:37 np0005603787 podman[246714]: 2026-01-31 10:24:37.642968266 +0000 UTC m=+0.138424817 container attach 9ac9752bf579374fe8f69c7ef79b7fedce251286917dcc9c51ff35b7c869e9a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_swirles, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:24:37 np0005603787 systemd[1]: libpod-9ac9752bf579374fe8f69c7ef79b7fedce251286917dcc9c51ff35b7c869e9a0.scope: Deactivated successfully.
Jan 31 05:24:37 np0005603787 podman[246714]: 2026-01-31 10:24:37.644004765 +0000 UTC m=+0.139461266 container died 9ac9752bf579374fe8f69c7ef79b7fedce251286917dcc9c51ff35b7c869e9a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 05:24:37 np0005603787 systemd[1]: var-lib-containers-storage-overlay-1cf6ae440bfb8d79dc5f93d3c52458d653ab636c10a252111ed46682f8f7ae27-merged.mount: Deactivated successfully.
Jan 31 05:24:37 np0005603787 podman[246714]: 2026-01-31 10:24:37.683396309 +0000 UTC m=+0.178852790 container remove 9ac9752bf579374fe8f69c7ef79b7fedce251286917dcc9c51ff35b7c869e9a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_swirles, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:24:37 np0005603787 systemd[1]: libpod-conmon-9ac9752bf579374fe8f69c7ef79b7fedce251286917dcc9c51ff35b7c869e9a0.scope: Deactivated successfully.
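great_allen, thirsty_swirles and (below) kind_northcutt are cephadm's short-lived helper containers: each one runs through the full podman lifecycle (create, init, start, attach, died, remove) in well under a second, which is why every libpod scope is started and deactivated almost immediately. One way to watch the same sequence from the host is podman events; this is only a sketch, and the image filter value is illustrative rather than copied from a cephadm invocation.

    # Sketch: print recent podman lifecycle events for the ceph image and exit.
    # Assumes podman is installed on the host; the filter value is illustrative.
    import subprocess

    subprocess.run(
        ["podman", "events", "--since", "5m", "--stream=false",
         "--filter", "image=quay.io/ceph/ceph"],
        check=True,
    )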
Jan 31 05:24:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:37 np0005603787 podman[246755]: 2026-01-31 10:24:37.843352672 +0000 UTC m=+0.051416323 container create 844fb176fe136fb999322867bab3a2b40ed67d51f5051de257f10ef0210b8c16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 05:24:37 np0005603787 systemd[1]: Started libpod-conmon-844fb176fe136fb999322867bab3a2b40ed67d51f5051de257f10ef0210b8c16.scope.
Jan 31 05:24:37 np0005603787 podman[246755]: 2026-01-31 10:24:37.818282478 +0000 UTC m=+0.026346219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:24:37 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:24:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7823d48b4afad0f4c00b82cc13ddbf4e379445ccbe6b85cc9619c23100d8d3d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:24:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7823d48b4afad0f4c00b82cc13ddbf4e379445ccbe6b85cc9619c23100d8d3d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:24:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7823d48b4afad0f4c00b82cc13ddbf4e379445ccbe6b85cc9619c23100d8d3d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:24:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7823d48b4afad0f4c00b82cc13ddbf4e379445ccbe6b85cc9619c23100d8d3d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:24:37 np0005603787 podman[246755]: 2026-01-31 10:24:37.940701878 +0000 UTC m=+0.148765539 container init 844fb176fe136fb999322867bab3a2b40ed67d51f5051de257f10ef0210b8c16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:24:37 np0005603787 podman[246755]: 2026-01-31 10:24:37.948421268 +0000 UTC m=+0.156484949 container start 844fb176fe136fb999322867bab3a2b40ed67d51f5051de257f10ef0210b8c16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_northcutt, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:24:37 np0005603787 podman[246755]: 2026-01-31 10:24:37.960516868 +0000 UTC m=+0.168580559 container attach 844fb176fe136fb999322867bab3a2b40ed67d51f5051de257f10ef0210b8c16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 05:24:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:24:38 np0005603787 lvm[246850]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:24:38 np0005603787 lvm[246850]: VG ceph_vg1 finished
Jan 31 05:24:38 np0005603787 lvm[246849]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:24:38 np0005603787 lvm[246849]: VG ceph_vg0 finished
Jan 31 05:24:38 np0005603787 lvm[246852]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:24:38 np0005603787 lvm[246852]: VG ceph_vg2 finished
Jan 31 05:24:38 np0005603787 kind_northcutt[246771]: {}
Jan 31 05:24:38 np0005603787 systemd[1]: libpod-844fb176fe136fb999322867bab3a2b40ed67d51f5051de257f10ef0210b8c16.scope: Deactivated successfully.
Jan 31 05:24:38 np0005603787 conmon[246771]: conmon 844fb176fe136fb99932 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-844fb176fe136fb999322867bab3a2b40ed67d51f5051de257f10ef0210b8c16.scope/container/memory.events
Jan 31 05:24:38 np0005603787 podman[246755]: 2026-01-31 10:24:38.66736996 +0000 UTC m=+0.875433641 container died 844fb176fe136fb999322867bab3a2b40ed67d51f5051de257f10ef0210b8c16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:24:38 np0005603787 systemd[1]: var-lib-containers-storage-overlay-7823d48b4afad0f4c00b82cc13ddbf4e379445ccbe6b85cc9619c23100d8d3d0-merged.mount: Deactivated successfully.
Jan 31 05:24:38 np0005603787 podman[246755]: 2026-01-31 10:24:38.712367707 +0000 UTC m=+0.920431368 container remove 844fb176fe136fb999322867bab3a2b40ed67d51f5051de257f10ef0210b8c16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_northcutt, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 05:24:38 np0005603787 systemd[1]: libpod-conmon-844fb176fe136fb999322867bab3a2b40ed67d51f5051de257f10ef0210b8c16.scope: Deactivated successfully.
Jan 31 05:24:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:24:38 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:24:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:24:38 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:24:39 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:24:39 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:24:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:24:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:24:43
Jan 31 05:24:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:24:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:24:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['default.rgw.log', 'backups', '.rgw.root', 'volumes', '.mgr', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.control']
Jan 31 05:24:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:24:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:24:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:24:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:24:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:24:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:24:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:24:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:24:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:24:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:24:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:24:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:24:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:24:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:24:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:24:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:24:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:24:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:24:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:24:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.8805523531010136e-07 of space, bias 1.0, pg target 5.641657059303041e-05 quantized to 32 (current 32)
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1278967097563662e-06 of space, bias 4.0, pg target 0.0013534760517076394 quantized to 16 (current 16)
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:24:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
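The pg_autoscaler figures above can be reproduced directly: each pool's target is its share of raw capacity multiplied by its bias and by a cluster-wide PG budget, then quantized. The budget works out to 300 here (the logged ratios times 300 reproduce every pg target), which is consistent with the default mon_target_pg_per_osd of 100 across 3 OSDs, though that breakdown is an inference rather than something stated in the log. For example 7.185749983720779e-06 x 1.0 x 300 gives the 0.0021557... target for '.mgr', and 1.1278967097563662e-06 x 4.0 x 300 gives the 0.0013534... target for 'cephfs.cephfs.meta'. A small check of that arithmetic follows, with a deliberately simplified quantizer; the real autoscaler also respects per-pool minimums and existing pg_num, which is why most pools stay at 32 or 16.

    # Recompute the pg_autoscaler targets logged above.
    # Assumption: the PG budget of 300 is inferred from the logged numbers.
    PG_BUDGET = 300

    def pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * PG_BUDGET

    def quantize(target: float, minimum: int = 1) -> int:
        # Simplified: round up to a power of two, never below the minimum.
        n = minimum
        while n < target:
            n *= 2
        return n

    print(pg_target(7.185749983720779e-06, 1.0))   # ~0.0021557, as logged for '.mgr'
    print(pg_target(1.1278967097563662e-06, 4.0))  # ~0.0013535, as logged for 'cephfs.cephfs.meta'
    print(quantize(pg_target(7.185749983720779e-06, 1.0)))  # 1, matching "quantized to 1"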
Jan 31 05:24:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:24:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:24:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:01 np0005603787 podman[246893]: 2026-01-31 10:25:01.877141538 +0000 UTC m=+0.089893553 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 05:25:01 np0005603787 podman[246892]: 2026-01-31 10:25:01.906271578 +0000 UTC m=+0.122328322 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
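The two health_status=healthy entries are podman's periodic healthchecks for ovn_metadata_agent and ovn_controller; per the embedded config_data, each check simply executes the /openstack/healthcheck script from the mounted healthchecks directory. The same test can be run on demand with podman healthcheck run, which exits 0 when the configured check passes. A small sketch, with the container name taken from the log above and podman assumed to be managing it on this host:

    # Sketch: trigger the ovn_controller healthcheck manually and report the result.
    import subprocess

    result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if result.returncode == 0 else "unhealthy")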
Jan 31 05:25:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:25:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:04 np0005603787 nova_compute[238603]: 2026-01-31 10:25:04.118 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:25:05 np0005603787 nova_compute[238603]: 2026-01-31 10:25:05.098 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:25:05 np0005603787 nova_compute[238603]: 2026-01-31 10:25:05.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:25:05 np0005603787 nova_compute[238603]: 2026-01-31 10:25:05.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:25:05 np0005603787 nova_compute[238603]: 2026-01-31 10:25:05.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:25:05 np0005603787 nova_compute[238603]: 2026-01-31 10:25:05.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:25:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:06 np0005603787 nova_compute[238603]: 2026-01-31 10:25:06.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:25:07 np0005603787 nova_compute[238603]: 2026-01-31 10:25:07.101 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:25:07 np0005603787 nova_compute[238603]: 2026-01-31 10:25:07.102 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:25:07 np0005603787 nova_compute[238603]: 2026-01-31 10:25:07.102 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:25:07 np0005603787 nova_compute[238603]: 2026-01-31 10:25:07.121 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:25:07 np0005603787 nova_compute[238603]: 2026-01-31 10:25:07.121 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:25:07 np0005603787 nova_compute[238603]: 2026-01-31 10:25:07.156 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:25:07 np0005603787 nova_compute[238603]: 2026-01-31 10:25:07.158 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:25:07 np0005603787 nova_compute[238603]: 2026-01-31 10:25:07.158 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:25:07 np0005603787 nova_compute[238603]: 2026-01-31 10:25:07.158 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:25:07 np0005603787 nova_compute[238603]: 2026-01-31 10:25:07.159 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:25:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:25:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/572762432' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:25:07 np0005603787 nova_compute[238603]: 2026-01-31 10:25:07.709 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
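The "Running cmd" / "CMD ... returned" pair shows nova's resource tracker shelling out to ceph df --format=json to size the RBD-backed storage, and the two ceph-mon audit lines are the same request arriving as a df mon_command from client.openstack. A minimal sketch of that call follows, using the exact flags from the log and reading the cluster totals from the stats section of the JSON; a reachable cluster, the openstack keyring and /etc/ceph/ceph.conf are assumed.

    # Sketch: run the same command nova logs above and read the cluster totals.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_used_bytes"], stats["total_avail_bytes"])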
Jan 31 05:25:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:07 np0005603787 nova_compute[238603]: 2026-01-31 10:25:07.898 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:25:07 np0005603787 nova_compute[238603]: 2026-01-31 10:25:07.899 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5108MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:25:07 np0005603787 nova_compute[238603]: 2026-01-31 10:25:07.899 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:25:07 np0005603787 nova_compute[238603]: 2026-01-31 10:25:07.899 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:25:07 np0005603787 nova_compute[238603]: 2026-01-31 10:25:07.968 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:25:07 np0005603787 nova_compute[238603]: 2026-01-31 10:25:07.969 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:25:07 np0005603787 nova_compute[238603]: 2026-01-31 10:25:07.996 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:25:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:25:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:25:08 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/488140214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:25:08 np0005603787 nova_compute[238603]: 2026-01-31 10:25:08.462 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:25:08 np0005603787 nova_compute[238603]: 2026-01-31 10:25:08.468 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:25:08 np0005603787 nova_compute[238603]: 2026-01-31 10:25:08.486 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:25:08 np0005603787 nova_compute[238603]: 2026-01-31 10:25:08.489 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:25:08 np0005603787 nova_compute[238603]: 2026-01-31 10:25:08.489 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
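The inventory nova reports to placement a few lines up is the raw host view with the allocation ratios applied: placement treats usable capacity as (total - reserved) * allocation_ratio, so this host offers 8 x 4.0 = 32 schedulable VCPU, (7679 - 512) x 1.0 = 7167 MB of RAM and 59 x 0.9 = roughly 53 GB of disk. A short check of that arithmetic, with the values copied from the inventory line above:

    # Recompute schedulable capacity from the inventory dict logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        # Placement capacity formula: (total - reserved) * allocation_ratio.
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~53.1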
Jan 31 05:25:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:11 np0005603787 nova_compute[238603]: 2026-01-31 10:25:11.471 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:25:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:25:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:25:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:25:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:25:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:25:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:25:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:25:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:25:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:25:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3329021516' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:25:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:25:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3329021516' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:25:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:25:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:25:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:32 np0005603787 podman[246980]: 2026-01-31 10:25:32.833837763 +0000 UTC m=+0.052479465 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 05:25:32 np0005603787 podman[246979]: 2026-01-31 10:25:32.87275075 +0000 UTC m=+0.091360991 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Jan 31 05:25:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:25:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:25:37.066 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:25:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:25:37.066 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:25:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:25:37.067 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:25:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:25:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:25:39 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:25:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:25:39 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:25:39 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:25:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:25:39 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:25:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:25:39 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:25:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:25:39 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:25:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:25:39 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
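The handle_command/audit pairs above show the cephadm mgr module dispatching ordinary mon commands (config generate-minimal-conf, auth get, osd tree). The same commands can be reproduced from the host with the ceph CLI; a small sketch, assuming an admin keyring is available and omitting the states filter ({"states": ["destroyed"]}) that the orchestrator adds to its osd tree call:

import json
import subprocess

def ceph(*args):
    # Thin wrapper around the ceph CLI; raises if the command fails.
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

minimal_conf = ceph("config", "generate-minimal-conf")
admin_key = ceph("auth", "get", "client.admin")
osd_tree = json.loads(ceph("osd", "tree", "--format", "json"))
print([node["name"] for node in osd_tree["nodes"]])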
Jan 31 05:25:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:40 np0005603787 podman[247161]: 2026-01-31 10:25:40.019097796 +0000 UTC m=+0.057194884 container create 914ae4481a80d8226ce3c736657bf8b6d80492d89adfaf060ead1d6ffab37536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 05:25:40 np0005603787 systemd[1]: Started libpod-conmon-914ae4481a80d8226ce3c736657bf8b6d80492d89adfaf060ead1d6ffab37536.scope.
Jan 31 05:25:40 np0005603787 podman[247161]: 2026-01-31 10:25:39.993779979 +0000 UTC m=+0.031877157 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:25:40 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:25:40 np0005603787 podman[247161]: 2026-01-31 10:25:40.110463447 +0000 UTC m=+0.148560565 container init 914ae4481a80d8226ce3c736657bf8b6d80492d89adfaf060ead1d6ffab37536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:25:40 np0005603787 podman[247161]: 2026-01-31 10:25:40.118743362 +0000 UTC m=+0.156840480 container start 914ae4481a80d8226ce3c736657bf8b6d80492d89adfaf060ead1d6ffab37536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 05:25:40 np0005603787 podman[247161]: 2026-01-31 10:25:40.123415448 +0000 UTC m=+0.161512566 container attach 914ae4481a80d8226ce3c736657bf8b6d80492d89adfaf060ead1d6ffab37536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:25:40 np0005603787 wonderful_sinoussi[247177]: 167 167
Jan 31 05:25:40 np0005603787 podman[247161]: 2026-01-31 10:25:40.125908726 +0000 UTC m=+0.164005814 container died 914ae4481a80d8226ce3c736657bf8b6d80492d89adfaf060ead1d6ffab37536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 05:25:40 np0005603787 systemd[1]: libpod-914ae4481a80d8226ce3c736657bf8b6d80492d89adfaf060ead1d6ffab37536.scope: Deactivated successfully.
Jan 31 05:25:40 np0005603787 systemd[1]: var-lib-containers-storage-overlay-2d6170318da4c42b52fe7d6517e4b61c5626c864794f23bbdcb3c2d55b3f46d2-merged.mount: Deactivated successfully.
Jan 31 05:25:40 np0005603787 podman[247161]: 2026-01-31 10:25:40.166597861 +0000 UTC m=+0.204694949 container remove 914ae4481a80d8226ce3c736657bf8b6d80492d89adfaf060ead1d6ffab37536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sinoussi, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:25:40 np0005603787 systemd[1]: libpod-conmon-914ae4481a80d8226ce3c736657bf8b6d80492d89adfaf060ead1d6ffab37536.scope: Deactivated successfully.
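The create/init/start/attach/died/remove sequence above is a one-shot podman container that exits almost immediately; the "167 167" it prints is consistent with a uid/gid probe (167 is the ceph user and group inside the image), which cephadm performs with short-lived containers like this. A sketch of an equivalent probe; the stat invocation is an assumption about what produced that output, not something shown in the log:

import subprocess

IMAGE = "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"

# Run a throwaway container (removed on exit, like the lifecycle above) that
# prints the numeric uid/gid of /var/lib/ceph inside the image.
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    check=True, capture_output=True, text=True).stdout
print(out.strip())  # expected to print "167 167" for this image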
Jan 31 05:25:40 np0005603787 podman[247200]: 2026-01-31 10:25:40.298897034 +0000 UTC m=+0.039653789 container create 17577ebde4669d5fc05fa410fbfd28a0ec341622fe58a122b91300f072515619 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_liskov, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:25:40 np0005603787 systemd[1]: Started libpod-conmon-17577ebde4669d5fc05fa410fbfd28a0ec341622fe58a122b91300f072515619.scope.
Jan 31 05:25:40 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:25:40 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b64ba65bde34ffc4cdaba0a3e4f9046aff5608e59df6311b216fcdd171f340/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:25:40 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b64ba65bde34ffc4cdaba0a3e4f9046aff5608e59df6311b216fcdd171f340/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:25:40 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b64ba65bde34ffc4cdaba0a3e4f9046aff5608e59df6311b216fcdd171f340/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:25:40 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b64ba65bde34ffc4cdaba0a3e4f9046aff5608e59df6311b216fcdd171f340/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:25:40 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b64ba65bde34ffc4cdaba0a3e4f9046aff5608e59df6311b216fcdd171f340/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:25:40 np0005603787 podman[247200]: 2026-01-31 10:25:40.369254754 +0000 UTC m=+0.110011509 container init 17577ebde4669d5fc05fa410fbfd28a0ec341622fe58a122b91300f072515619 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_liskov, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:25:40 np0005603787 podman[247200]: 2026-01-31 10:25:40.375229776 +0000 UTC m=+0.115986511 container start 17577ebde4669d5fc05fa410fbfd28a0ec341622fe58a122b91300f072515619 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_liskov, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:25:40 np0005603787 podman[247200]: 2026-01-31 10:25:40.280362931 +0000 UTC m=+0.021119666 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:25:40 np0005603787 podman[247200]: 2026-01-31 10:25:40.379007759 +0000 UTC m=+0.119764494 container attach 17577ebde4669d5fc05fa410fbfd28a0ec341622fe58a122b91300f072515619 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_liskov, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 05:25:40 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:25:40 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:25:40 np0005603787 stupefied_liskov[247216]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:25:40 np0005603787 stupefied_liskov[247216]: --> All data devices are unavailable
Jan 31 05:25:40 np0005603787 systemd[1]: libpod-17577ebde4669d5fc05fa410fbfd28a0ec341622fe58a122b91300f072515619.scope: Deactivated successfully.
Jan 31 05:25:40 np0005603787 podman[247200]: 2026-01-31 10:25:40.851101948 +0000 UTC m=+0.591858703 container died 17577ebde4669d5fc05fa410fbfd28a0ec341622fe58a122b91300f072515619 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 05:25:40 np0005603787 systemd[1]: var-lib-containers-storage-overlay-34b64ba65bde34ffc4cdaba0a3e4f9046aff5608e59df6311b216fcdd171f340-merged.mount: Deactivated successfully.
Jan 31 05:25:40 np0005603787 podman[247200]: 2026-01-31 10:25:40.898028412 +0000 UTC m=+0.638785177 container remove 17577ebde4669d5fc05fa410fbfd28a0ec341622fe58a122b91300f072515619 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 31 05:25:40 np0005603787 systemd[1]: libpod-conmon-17577ebde4669d5fc05fa410fbfd28a0ec341622fe58a122b91300f072515619.scope: Deactivated successfully.
Jan 31 05:25:41 np0005603787 podman[247309]: 2026-01-31 10:25:41.341766531 +0000 UTC m=+0.042723631 container create d1b1c1bbf7f4a1023919e019f633e6cc3a384beb481da20d84c99deb4171dc9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:25:41 np0005603787 systemd[1]: Started libpod-conmon-d1b1c1bbf7f4a1023919e019f633e6cc3a384beb481da20d84c99deb4171dc9a.scope.
Jan 31 05:25:41 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:25:41 np0005603787 podman[247309]: 2026-01-31 10:25:41.421881806 +0000 UTC m=+0.122838916 container init d1b1c1bbf7f4a1023919e019f633e6cc3a384beb481da20d84c99deb4171dc9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_curie, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:25:41 np0005603787 podman[247309]: 2026-01-31 10:25:41.326691982 +0000 UTC m=+0.027649062 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:25:41 np0005603787 podman[247309]: 2026-01-31 10:25:41.431244761 +0000 UTC m=+0.132201881 container start d1b1c1bbf7f4a1023919e019f633e6cc3a384beb481da20d84c99deb4171dc9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_curie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 05:25:41 np0005603787 agitated_curie[247325]: 167 167
Jan 31 05:25:41 np0005603787 podman[247309]: 2026-01-31 10:25:41.435429695 +0000 UTC m=+0.136386865 container attach d1b1c1bbf7f4a1023919e019f633e6cc3a384beb481da20d84c99deb4171dc9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:25:41 np0005603787 systemd[1]: libpod-d1b1c1bbf7f4a1023919e019f633e6cc3a384beb481da20d84c99deb4171dc9a.scope: Deactivated successfully.
Jan 31 05:25:41 np0005603787 podman[247309]: 2026-01-31 10:25:41.436444051 +0000 UTC m=+0.137401131 container died d1b1c1bbf7f4a1023919e019f633e6cc3a384beb481da20d84c99deb4171dc9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_curie, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 05:25:41 np0005603787 systemd[1]: var-lib-containers-storage-overlay-7736c85bd2eaf624c93226d113d711070a434ec942806a6ae686a0e2981d1b40-merged.mount: Deactivated successfully.
Jan 31 05:25:41 np0005603787 podman[247309]: 2026-01-31 10:25:41.47061455 +0000 UTC m=+0.171571630 container remove d1b1c1bbf7f4a1023919e019f633e6cc3a384beb481da20d84c99deb4171dc9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 05:25:41 np0005603787 systemd[1]: libpod-conmon-d1b1c1bbf7f4a1023919e019f633e6cc3a384beb481da20d84c99deb4171dc9a.scope: Deactivated successfully.
Jan 31 05:25:41 np0005603787 podman[247349]: 2026-01-31 10:25:41.625749771 +0000 UTC m=+0.049543056 container create 43aa93dacded03bc23192395be661d963e60364779644e234755c36a8924f773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_kilby, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:25:41 np0005603787 systemd[1]: Started libpod-conmon-43aa93dacded03bc23192395be661d963e60364779644e234755c36a8924f773.scope.
Jan 31 05:25:41 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:25:41 np0005603787 podman[247349]: 2026-01-31 10:25:41.605469701 +0000 UTC m=+0.029263006 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:25:41 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846ab9ac91840d91e3993a414114e1a3dd2d3e97417f5db06959267a057fe228/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:25:41 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846ab9ac91840d91e3993a414114e1a3dd2d3e97417f5db06959267a057fe228/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:25:41 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846ab9ac91840d91e3993a414114e1a3dd2d3e97417f5db06959267a057fe228/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:25:41 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846ab9ac91840d91e3993a414114e1a3dd2d3e97417f5db06959267a057fe228/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:25:41 np0005603787 podman[247349]: 2026-01-31 10:25:41.71627272 +0000 UTC m=+0.140066065 container init 43aa93dacded03bc23192395be661d963e60364779644e234755c36a8924f773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:25:41 np0005603787 podman[247349]: 2026-01-31 10:25:41.724175585 +0000 UTC m=+0.147968880 container start 43aa93dacded03bc23192395be661d963e60364779644e234755c36a8924f773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 05:25:41 np0005603787 podman[247349]: 2026-01-31 10:25:41.728560964 +0000 UTC m=+0.152354259 container attach 43aa93dacded03bc23192395be661d963e60364779644e234755c36a8924f773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 05:25:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]: {
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:    "0": [
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:        {
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "devices": [
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "/dev/loop3"
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            ],
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "lv_name": "ceph_lv0",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "lv_size": "21470642176",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "name": "ceph_lv0",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "tags": {
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.cluster_name": "ceph",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.crush_device_class": "",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.encrypted": "0",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.objectstore": "bluestore",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.osd_id": "0",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.type": "block",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.vdo": "0",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.with_tpm": "0"
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            },
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "type": "block",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "vg_name": "ceph_vg0"
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:        }
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:    ],
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:    "1": [
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:        {
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "devices": [
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "/dev/loop4"
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            ],
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "lv_name": "ceph_lv1",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "lv_size": "21470642176",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "name": "ceph_lv1",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "tags": {
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.cluster_name": "ceph",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.crush_device_class": "",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.encrypted": "0",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.objectstore": "bluestore",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.osd_id": "1",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.type": "block",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.vdo": "0",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.with_tpm": "0"
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            },
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "type": "block",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "vg_name": "ceph_vg1"
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:        }
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:    ],
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:    "2": [
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:        {
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "devices": [
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "/dev/loop5"
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            ],
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "lv_name": "ceph_lv2",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "lv_size": "21470642176",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "name": "ceph_lv2",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "tags": {
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.cluster_name": "ceph",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.crush_device_class": "",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.encrypted": "0",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.objectstore": "bluestore",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.osd_id": "2",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.type": "block",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.vdo": "0",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:                "ceph.with_tpm": "0"
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            },
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "type": "block",
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:            "vg_name": "ceph_vg2"
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:        }
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]:    ]
Jan 31 05:25:41 np0005603787 hopeful_kilby[247366]: }
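The JSON block printed by hopeful_kilby is a per-OSD inventory keyed by OSD id, each entry describing the backing logical volume and its ceph.* LVM tags; it matches the shape of ceph-volume lvm list --format json output, though the exact command is not visible in the log. A short sketch that reduces such a report to one line per OSD:

import json

def summarize_osds(raw_report: str) -> None:
    # raw_report: the JSON text emitted by the container above.
    report = json.loads(raw_report)
    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} "
                  f"objectstore={tags['ceph.objectstore']}")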
Jan 31 05:25:42 np0005603787 systemd[1]: libpod-43aa93dacded03bc23192395be661d963e60364779644e234755c36a8924f773.scope: Deactivated successfully.
Jan 31 05:25:42 np0005603787 podman[247349]: 2026-01-31 10:25:42.018126626 +0000 UTC m=+0.441919891 container died 43aa93dacded03bc23192395be661d963e60364779644e234755c36a8924f773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_kilby, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:25:42 np0005603787 systemd[1]: var-lib-containers-storage-overlay-846ab9ac91840d91e3993a414114e1a3dd2d3e97417f5db06959267a057fe228-merged.mount: Deactivated successfully.
Jan 31 05:25:42 np0005603787 podman[247349]: 2026-01-31 10:25:42.054746891 +0000 UTC m=+0.478540156 container remove 43aa93dacded03bc23192395be661d963e60364779644e234755c36a8924f773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_kilby, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 05:25:42 np0005603787 systemd[1]: libpod-conmon-43aa93dacded03bc23192395be661d963e60364779644e234755c36a8924f773.scope: Deactivated successfully.
Jan 31 05:25:42 np0005603787 podman[247449]: 2026-01-31 10:25:42.546978526 +0000 UTC m=+0.048957751 container create 545f7c639912c1226723fd57a63194e2454e6878c21b455b765a7e068dcaa140 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:25:42 np0005603787 systemd[1]: Started libpod-conmon-545f7c639912c1226723fd57a63194e2454e6878c21b455b765a7e068dcaa140.scope.
Jan 31 05:25:42 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:25:42 np0005603787 podman[247449]: 2026-01-31 10:25:42.523805347 +0000 UTC m=+0.025784612 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:25:42 np0005603787 podman[247449]: 2026-01-31 10:25:42.625954141 +0000 UTC m=+0.127933406 container init 545f7c639912c1226723fd57a63194e2454e6878c21b455b765a7e068dcaa140 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_matsumoto, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 05:25:42 np0005603787 podman[247449]: 2026-01-31 10:25:42.634837181 +0000 UTC m=+0.136816396 container start 545f7c639912c1226723fd57a63194e2454e6878c21b455b765a7e068dcaa140 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 05:25:42 np0005603787 podman[247449]: 2026-01-31 10:25:42.63881547 +0000 UTC m=+0.140794695 container attach 545f7c639912c1226723fd57a63194e2454e6878c21b455b765a7e068dcaa140 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_matsumoto, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:25:42 np0005603787 pensive_matsumoto[247465]: 167 167
Jan 31 05:25:42 np0005603787 systemd[1]: libpod-545f7c639912c1226723fd57a63194e2454e6878c21b455b765a7e068dcaa140.scope: Deactivated successfully.
Jan 31 05:25:42 np0005603787 podman[247449]: 2026-01-31 10:25:42.640803263 +0000 UTC m=+0.142782488 container died 545f7c639912c1226723fd57a63194e2454e6878c21b455b765a7e068dcaa140 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:25:42 np0005603787 systemd[1]: var-lib-containers-storage-overlay-72f1347ea9c27ae78f51e704074a5e230a7955b4a7eaa78dbdc84d9c364457c3-merged.mount: Deactivated successfully.
Jan 31 05:25:42 np0005603787 podman[247449]: 2026-01-31 10:25:42.687589954 +0000 UTC m=+0.189569179 container remove 545f7c639912c1226723fd57a63194e2454e6878c21b455b765a7e068dcaa140 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 05:25:42 np0005603787 systemd[1]: libpod-conmon-545f7c639912c1226723fd57a63194e2454e6878c21b455b765a7e068dcaa140.scope: Deactivated successfully.
Jan 31 05:25:42 np0005603787 podman[247489]: 2026-01-31 10:25:42.866962265 +0000 UTC m=+0.049247588 container create ef97729fb706a0bbd4be0a00021c21dee6c937273e8d8b7ad2e1c0b3bde561d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:25:42 np0005603787 systemd[1]: Started libpod-conmon-ef97729fb706a0bbd4be0a00021c21dee6c937273e8d8b7ad2e1c0b3bde561d9.scope.
Jan 31 05:25:42 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:25:42 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708066d6b681df15df1cf99e52971fe2d1521ca62d75684f63d29f99e9b09666/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:25:42 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708066d6b681df15df1cf99e52971fe2d1521ca62d75684f63d29f99e9b09666/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:25:42 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708066d6b681df15df1cf99e52971fe2d1521ca62d75684f63d29f99e9b09666/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:25:42 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708066d6b681df15df1cf99e52971fe2d1521ca62d75684f63d29f99e9b09666/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:25:42 np0005603787 podman[247489]: 2026-01-31 10:25:42.845550283 +0000 UTC m=+0.027835636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:25:42 np0005603787 podman[247489]: 2026-01-31 10:25:42.950657247 +0000 UTC m=+0.132942560 container init ef97729fb706a0bbd4be0a00021c21dee6c937273e8d8b7ad2e1c0b3bde561d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_merkle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:25:42 np0005603787 podman[247489]: 2026-01-31 10:25:42.954607365 +0000 UTC m=+0.136892668 container start ef97729fb706a0bbd4be0a00021c21dee6c937273e8d8b7ad2e1c0b3bde561d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:25:42 np0005603787 podman[247489]: 2026-01-31 10:25:42.959097646 +0000 UTC m=+0.141382949 container attach ef97729fb706a0bbd4be0a00021c21dee6c937273e8d8b7ad2e1c0b3bde561d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 05:25:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:25:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:25:43
Jan 31 05:25:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:25:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:25:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'images', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'backups', 'default.rgw.control', 'cephfs.cephfs.data']
Jan 31 05:25:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
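The balancer lines above show the upmap optimizer running against its 5% max-misplaced budget and preparing 0 of 10 possible upmap changes, i.e. the 305 active+clean PGs are already balanced. The module's state can be confirmed from the CLI; a minimal check, assuming the ceph CLI and an admin keyring on this host:

import subprocess

# "ceph balancer status" reports the mode (upmap here), whether the module is
# active, and the result of the last optimization attempt.
status = subprocess.run(["ceph", "balancer", "status"],
                        check=True, capture_output=True, text=True).stdout
print(status)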
Jan 31 05:25:43 np0005603787 lvm[247584]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:25:43 np0005603787 lvm[247584]: VG ceph_vg1 finished
Jan 31 05:25:43 np0005603787 lvm[247583]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:25:43 np0005603787 lvm[247583]: VG ceph_vg0 finished
Jan 31 05:25:43 np0005603787 lvm[247586]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:25:43 np0005603787 lvm[247586]: VG ceph_vg2 finished
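The lvm events above are pvscan autoactivation noticing that each ceph_vgN volume group is complete once its single loop-device PV comes online. The same logical volumes and their ceph tags can be listed directly; a sketch assuming an lvm2 build that supports --reportformat json:

import json
import subprocess

out = subprocess.run(
    ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name,lv_tags"],
    check=True, capture_output=True, text=True).stdout
for lv in json.loads(out)["report"][0]["lv"]:
    if "ceph.osd_id" in lv["lv_tags"]:
        print(lv["vg_name"], lv["lv_name"], lv["lv_tags"])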
Jan 31 05:25:43 np0005603787 nervous_merkle[247505]: {}
Jan 31 05:25:43 np0005603787 systemd[1]: libpod-ef97729fb706a0bbd4be0a00021c21dee6c937273e8d8b7ad2e1c0b3bde561d9.scope: Deactivated successfully.
Jan 31 05:25:43 np0005603787 systemd[1]: libpod-ef97729fb706a0bbd4be0a00021c21dee6c937273e8d8b7ad2e1c0b3bde561d9.scope: Consumed 1.034s CPU time.
Jan 31 05:25:43 np0005603787 podman[247489]: 2026-01-31 10:25:43.666968437 +0000 UTC m=+0.849253750 container died ef97729fb706a0bbd4be0a00021c21dee6c937273e8d8b7ad2e1c0b3bde561d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 05:25:43 np0005603787 systemd[1]: var-lib-containers-storage-overlay-708066d6b681df15df1cf99e52971fe2d1521ca62d75684f63d29f99e9b09666-merged.mount: Deactivated successfully.
Jan 31 05:25:43 np0005603787 podman[247489]: 2026-01-31 10:25:43.708484184 +0000 UTC m=+0.890769487 container remove ef97729fb706a0bbd4be0a00021c21dee6c937273e8d8b7ad2e1c0b3bde561d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_merkle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 05:25:43 np0005603787 systemd[1]: libpod-conmon-ef97729fb706a0bbd4be0a00021c21dee6c937273e8d8b7ad2e1c0b3bde561d9.scope: Deactivated successfully.
Jan 31 05:25:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:25:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:25:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:25:43 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:25:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:25:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:25:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:25:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:25:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:25:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:25:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:25:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:25:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:25:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:25:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:25:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:25:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:25:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:25:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:25:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:25:44 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:25:44 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:25:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:25:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:25:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.8805523531010136e-07 of space, bias 1.0, pg target 5.641657059303041e-05 quantized to 32 (current 32)
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1278967097563662e-06 of space, bias 4.0, pg target 0.0013534760517076394 quantized to 16 (current 16)
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:25:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
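Note on the pg_autoscaler block above: the "pg target" values follow directly from the numbers it prints. Each pool's raw target is its share of cluster capacity times its bias times the cluster-wide PG budget (OSD count times the mon_target_pg_per_osd option; with the 3 OSDs in this cluster and the Ceph default of 100, that budget is 300), and the result is then quantized, which here leaves every pool at its existing pg_num as the "(current N)" suffixes show. A minimal check of that arithmetic, assuming the default mon_target_pg_per_osd of 100 (the option and its value are not shown in this log):

    # Re-derive the "pg target" numbers printed by the pg_autoscaler above.
    # Assumption: mon_target_pg_per_osd = 100 (Ceph default; not logged here).
    def raw_pg_target(capacity_ratio, bias, num_osds=3, target_pg_per_osd=100):
        # pool's share of raw capacity * bias * cluster-wide PG budget
        return capacity_ratio * bias * num_osds * target_pg_per_osd

    print(raw_pg_target(7.185749983720779e-06, 1.0))   # 0.0021557... -> '.mgr', quantized to 1
    print(raw_pg_target(1.1278967097563662e-06, 4.0))  # 0.0013534... -> 'cephfs.cephfs.meta', quantized to 16
    print(raw_pg_target(1.8805523531010136e-07, 1.0))  # 5.64e-05     -> 'images', quantized to 32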
Jan 31 05:25:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:25:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:25:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:26:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:26:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:26:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:26:03 np0005603787 podman[247629]: 2026-01-31 10:26:03.84307354 +0000 UTC m=+0.060527624 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:26:03 np0005603787 podman[247628]: 2026-01-31 10:26:03.860779281 +0000 UTC m=+0.076092437 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
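Note on the two health_status events above: the config_data blob also records how the periodic check is wired. The host directory under 'healthcheck.mount' is bind-mounted into the container at /openstack (the matching ':/openstack:ro,z' volume entry), and the path under 'healthcheck.test' appears to be the command podman runs inside the container on each interval, whose result is reported as the health_status=healthy seen here. A small sketch, assuming the same config_data structure shown in these lines, that maps the in-container test path back to the script on the host:

    # Trimmed copy of the ovn_controller config_data from the log above;
    # only the fields used here are reproduced.
    config_data = {
        "healthcheck": {
            "mount": "/var/lib/openstack/healthchecks/ovn_controller",
            "test": "/openstack/healthcheck",
        },
        "volumes": [
            "/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z",
        ],
    }

    test = config_data["healthcheck"]["test"]
    for vol in config_data["volumes"]:
        src, dst, *_opts = vol.split(":")
        if test.startswith(dst + "/"):
            # container path /openstack/healthcheck -> script under the host healthchecks dir
            print(test, "->", test.replace(dst, src, 1))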
Jan 31 05:26:05 np0005603787 nova_compute[238603]: 2026-01-31 10:26:05.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:26:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:26:06 np0005603787 nova_compute[238603]: 2026-01-31 10:26:06.098 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:26:06 np0005603787 nova_compute[238603]: 2026-01-31 10:26:06.101 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:26:06 np0005603787 nova_compute[238603]: 2026-01-31 10:26:06.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:26:06 np0005603787 nova_compute[238603]: 2026-01-31 10:26:06.102 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:26:07 np0005603787 nova_compute[238603]: 2026-01-31 10:26:07.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:26:07 np0005603787 nova_compute[238603]: 2026-01-31 10:26:07.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:26:07 np0005603787 nova_compute[238603]: 2026-01-31 10:26:07.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:26:07 np0005603787 nova_compute[238603]: 2026-01-31 10:26:07.118 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:26:07 np0005603787 nova_compute[238603]: 2026-01-31 10:26:07.118 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:26:07 np0005603787 nova_compute[238603]: 2026-01-31 10:26:07.119 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:26:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:26:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:26:09 np0005603787 nova_compute[238603]: 2026-01-31 10:26:09.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:26:09 np0005603787 nova_compute[238603]: 2026-01-31 10:26:09.136 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:26:09 np0005603787 nova_compute[238603]: 2026-01-31 10:26:09.136 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:26:09 np0005603787 nova_compute[238603]: 2026-01-31 10:26:09.136 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:26:09 np0005603787 nova_compute[238603]: 2026-01-31 10:26:09.136 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:26:09 np0005603787 nova_compute[238603]: 2026-01-31 10:26:09.137 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:26:09 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:26:09 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/445431864' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:26:09 np0005603787 nova_compute[238603]: 2026-01-31 10:26:09.653 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:26:09 np0005603787 nova_compute[238603]: 2026-01-31 10:26:09.800 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:26:09 np0005603787 nova_compute[238603]: 2026-01-31 10:26:09.801 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5079MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:26:09 np0005603787 nova_compute[238603]: 2026-01-31 10:26:09.801 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:26:09 np0005603787 nova_compute[238603]: 2026-01-31 10:26:09.802 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:26:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:26:09 np0005603787 nova_compute[238603]: 2026-01-31 10:26:09.911 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:26:09 np0005603787 nova_compute[238603]: 2026-01-31 10:26:09.912 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:26:09 np0005603787 nova_compute[238603]: 2026-01-31 10:26:09.935 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:26:10 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:26:10 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1133581248' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:26:10 np0005603787 nova_compute[238603]: 2026-01-31 10:26:10.445 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:26:10 np0005603787 nova_compute[238603]: 2026-01-31 10:26:10.451 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:26:10 np0005603787 nova_compute[238603]: 2026-01-31 10:26:10.473 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:26:10 np0005603787 nova_compute[238603]: 2026-01-31 10:26:10.474 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:26:10 np0005603787 nova_compute[238603]: 2026-01-31 10:26:10.475 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
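Note on the update_available_resource pass above: on this RBD-backed node the resource tracker gets its disk figures by shelling out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`, and the logged free_disk=59.98828125GB is exactly 64411926528 bytes / 2^30, the same capacity figure printed in the pg_autoscaler lines (rounded to "60 GiB" in the pgmap lines). A sketch of the same probe and unit conversion, assuming the usual top-level "stats" keys of `ceph df --format=json` (total_bytes / total_avail_bytes), which are not printed in this log:

    # Repeat nova's capacity probe and convert it the way the resource view logs it.
    # Assumption: `ceph df --format=json` exposes a top-level "stats" object with
    # "total_bytes" and "total_avail_bytes"; the JSON itself is not shown in this log.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print("total GiB:", stats["total_bytes"] / 2**30)
    print("avail GiB:", stats["total_avail_bytes"] / 2**30)
    # For comparison: 64411926528 bytes (the capacity figure in the autoscaler lines)
    # is 64411926528 / 2**30 = 59.98828125 GiB, the free_disk value logged above.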
Jan 31 05:26:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:26:12 np0005603787 nova_compute[238603]: 2026-01-31 10:26:12.471 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:26:12 np0005603787 nova_compute[238603]: 2026-01-31 10:26:12.495 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:26:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:26:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:26:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:26:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:26:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:26:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:26:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:26:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:26:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:26:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:26:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:26:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:26:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 31 05:26:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 31 05:26:21 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 31 05:26:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:26:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1751087923' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:26:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:26:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1751087923' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:26:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 8.5 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 820 KiB/s wr, 10 op/s
Jan 31 05:26:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 31 05:26:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 31 05:26:22 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 31 05:26:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:26:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 37 MiB data, 173 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 4.6 MiB/s wr, 36 op/s
Jan 31 05:26:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 31 05:26:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 31 05:26:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:26:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.7 MiB/s wr, 31 op/s
Jan 31 05:26:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.3 MiB/s wr, 27 op/s
Jan 31 05:26:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 31 05:26:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 31 05:26:33 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 31 05:26:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 455 KiB/s wr, 10 op/s
Jan 31 05:26:34 np0005603787 podman[247720]: 2026-01-31 10:26:34.857380913 +0000 UTC m=+0.064997336 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 05:26:34 np0005603787 podman[247719]: 2026-01-31 10:26:34.877995553 +0000 UTC m=+0.089092031 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Jan 31 05:26:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 29 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 KiB/s wr, 24 op/s
Jan 31 05:26:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 31 05:26:36 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 31 05:26:36 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 31 05:26:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:26:37.067 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:26:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:26:37.067 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:26:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:26:37.067 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:26:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 29 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Jan 31 05:26:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:26:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 61 op/s
Jan 31 05:26:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.2 KiB/s wr, 57 op/s
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:26:43.106466) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855203106503, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1395, "num_deletes": 252, "total_data_size": 2177780, "memory_usage": 2215744, "flush_reason": "Manual Compaction"}
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855203125915, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2145696, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19687, "largest_seqno": 21081, "table_properties": {"data_size": 2139098, "index_size": 3797, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13724, "raw_average_key_size": 19, "raw_value_size": 2125773, "raw_average_value_size": 3085, "num_data_blocks": 173, "num_entries": 689, "num_filter_entries": 689, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769855063, "oldest_key_time": 1769855063, "file_creation_time": 1769855203, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 19505 microseconds, and 5810 cpu microseconds.
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:26:43.125970) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2145696 bytes OK
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:26:43.125992) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:26:43.127954) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:26:43.127975) EVENT_LOG_v1 {"time_micros": 1769855203127968, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:26:43.127998) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2171591, prev total WAL file size 2171591, number of live WAL files 2.
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:26:43.128594) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2095KB)], [47(7234KB)]
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855203128641, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9553772, "oldest_snapshot_seqno": -1}
Jan 31 05:26:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:26:43
Jan 31 05:26:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:26:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:26:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['images', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta']
Jan 31 05:26:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4408 keys, 7782972 bytes, temperature: kUnknown
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855203167671, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7782972, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7752246, "index_size": 18584, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 108974, "raw_average_key_size": 24, "raw_value_size": 7671233, "raw_average_value_size": 1740, "num_data_blocks": 777, "num_entries": 4408, "num_filter_entries": 4408, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853439, "oldest_key_time": 0, "file_creation_time": 1769855203, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:26:43.167876) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7782972 bytes
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:26:43.169377) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 244.4 rd, 199.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(8.1) write-amplify(3.6) OK, records in: 4928, records dropped: 520 output_compression: NoCompression
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:26:43.169391) EVENT_LOG_v1 {"time_micros": 1769855203169383, "job": 24, "event": "compaction_finished", "compaction_time_micros": 39095, "compaction_time_cpu_micros": 22135, "output_level": 6, "num_output_files": 1, "total_output_size": 7782972, "num_input_records": 4928, "num_output_records": 4408, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855203169715, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855203170551, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:26:43.128550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:26:43.170620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:26:43.170627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:26:43.170630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:26:43.170633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:26:43 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:26:43.170636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
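Note on the RocksDB JOB 24 summary above: its figures are internally consistent and the amplification numbers can be re-derived from the logged byte counts. The 2,145,696-byte L0 table #49 was merged with the pre-existing L6 table #47 (input_data_size 9,553,772 for the pair) into the single 7,782,972-byte L6 table #50, and 4,928 input records minus the 520 dropped leaves the 4,408 keys reported for the output file. A quick check of the printed amplification values from those figures:

    # Re-derive the "write-amplify(3.6)" and "read-write-amplify(8.1)" figures for JOB 24.
    l0_in    = 2_145_696   # table #49, the Level-0 flush just before this compaction
    total_in = 9_553_772   # "input_data_size" from the compaction_started event (#49 + #47)
    out      = 7_782_972   # table #50, the compacted L6 output

    print(round(out / l0_in, 1))               # 3.6  bytes written per byte of new L0 input
    print(round((total_in + out) / l0_in, 1))  # 8.1  bytes read + written per byte of new L0 input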
Jan 31 05:26:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.1 KiB/s wr, 32 op/s
Jan 31 05:26:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:26:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:26:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:26:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:26:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:26:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:26:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:26:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:26:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:26:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:26:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:26:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:26:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:26:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:26:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:26:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:26:44 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:26:44 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:26:44 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:26:44 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:26:44 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:26:44 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:26:44 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:26:44 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:26:44 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:26:44 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:26:44 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:26:44 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:26:44 np0005603787 podman[247909]: 2026-01-31 10:26:44.82893239 +0000 UTC m=+0.062059506 container create d1c510d4667c635c27fc3f84ddf03067d696e7c435206caf5dc7111f7f9e5e99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:26:44 np0005603787 systemd[1]: Started libpod-conmon-d1c510d4667c635c27fc3f84ddf03067d696e7c435206caf5dc7111f7f9e5e99.scope.
Jan 31 05:26:44 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:26:44 np0005603787 podman[247909]: 2026-01-31 10:26:44.802529313 +0000 UTC m=+0.035656489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:26:44 np0005603787 podman[247909]: 2026-01-31 10:26:44.909462556 +0000 UTC m=+0.142589642 container init d1c510d4667c635c27fc3f84ddf03067d696e7c435206caf5dc7111f7f9e5e99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 05:26:44 np0005603787 podman[247909]: 2026-01-31 10:26:44.916540438 +0000 UTC m=+0.149667514 container start d1c510d4667c635c27fc3f84ddf03067d696e7c435206caf5dc7111f7f9e5e99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 05:26:44 np0005603787 podman[247909]: 2026-01-31 10:26:44.92025589 +0000 UTC m=+0.153382996 container attach d1c510d4667c635c27fc3f84ddf03067d696e7c435206caf5dc7111f7f9e5e99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mirzakhani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 05:26:44 np0005603787 sweet_mirzakhani[247926]: 167 167
Jan 31 05:26:44 np0005603787 systemd[1]: libpod-d1c510d4667c635c27fc3f84ddf03067d696e7c435206caf5dc7111f7f9e5e99.scope: Deactivated successfully.
Jan 31 05:26:44 np0005603787 conmon[247926]: conmon d1c510d4667c635c27fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d1c510d4667c635c27fc3f84ddf03067d696e7c435206caf5dc7111f7f9e5e99.scope/container/memory.events
Jan 31 05:26:44 np0005603787 podman[247909]: 2026-01-31 10:26:44.923168889 +0000 UTC m=+0.156295955 container died d1c510d4667c635c27fc3f84ddf03067d696e7c435206caf5dc7111f7f9e5e99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:26:44 np0005603787 systemd[1]: var-lib-containers-storage-overlay-1c9241510c02083b447e3367a808363a3e0a444116624b1447a773f5b733f90e-merged.mount: Deactivated successfully.
Jan 31 05:26:44 np0005603787 podman[247909]: 2026-01-31 10:26:44.967105352 +0000 UTC m=+0.200232458 container remove d1c510d4667c635c27fc3f84ddf03067d696e7c435206caf5dc7111f7f9e5e99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mirzakhani, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:26:44 np0005603787 systemd[1]: libpod-conmon-d1c510d4667c635c27fc3f84ddf03067d696e7c435206caf5dc7111f7f9e5e99.scope: Deactivated successfully.
Jan 31 05:26:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:26:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:26:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:26:45 np0005603787 podman[247950]: 2026-01-31 10:26:45.164120111 +0000 UTC m=+0.068205512 container create 8a7bc5ed395cdd2a508d2da2f212e0dff66f4fbd551ed7eb5af45b3260323e17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 05:26:45 np0005603787 systemd[1]: Started libpod-conmon-8a7bc5ed395cdd2a508d2da2f212e0dff66f4fbd551ed7eb5af45b3260323e17.scope.
Jan 31 05:26:45 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:26:45 np0005603787 podman[247950]: 2026-01-31 10:26:45.136621145 +0000 UTC m=+0.040706606 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:26:45 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60de23e9c52a83f36db30d76c1699309f1cca9e93e8fe17e18dc8c5616a899d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:26:45 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60de23e9c52a83f36db30d76c1699309f1cca9e93e8fe17e18dc8c5616a899d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:26:45 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60de23e9c52a83f36db30d76c1699309f1cca9e93e8fe17e18dc8c5616a899d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:26:45 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60de23e9c52a83f36db30d76c1699309f1cca9e93e8fe17e18dc8c5616a899d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:26:45 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60de23e9c52a83f36db30d76c1699309f1cca9e93e8fe17e18dc8c5616a899d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:26:45 np0005603787 podman[247950]: 2026-01-31 10:26:45.254129206 +0000 UTC m=+0.158214667 container init 8a7bc5ed395cdd2a508d2da2f212e0dff66f4fbd551ed7eb5af45b3260323e17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bohr, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:26:45 np0005603787 podman[247950]: 2026-01-31 10:26:45.267613701 +0000 UTC m=+0.171699112 container start 8a7bc5ed395cdd2a508d2da2f212e0dff66f4fbd551ed7eb5af45b3260323e17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:26:45 np0005603787 podman[247950]: 2026-01-31 10:26:45.272204137 +0000 UTC m=+0.176289548 container attach 8a7bc5ed395cdd2a508d2da2f212e0dff66f4fbd551ed7eb5af45b3260323e17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 05:26:45 np0005603787 hardcore_bohr[247966]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:26:45 np0005603787 hardcore_bohr[247966]: --> All data devices are unavailable
Jan 31 05:26:45 np0005603787 systemd[1]: libpod-8a7bc5ed395cdd2a508d2da2f212e0dff66f4fbd551ed7eb5af45b3260323e17.scope: Deactivated successfully.
Jan 31 05:26:45 np0005603787 podman[247950]: 2026-01-31 10:26:45.773284962 +0000 UTC m=+0.677370363 container died 8a7bc5ed395cdd2a508d2da2f212e0dff66f4fbd551ed7eb5af45b3260323e17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bohr, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 05:26:45 np0005603787 systemd[1]: var-lib-containers-storage-overlay-60de23e9c52a83f36db30d76c1699309f1cca9e93e8fe17e18dc8c5616a899d9-merged.mount: Deactivated successfully.
Jan 31 05:26:45 np0005603787 podman[247950]: 2026-01-31 10:26:45.82588373 +0000 UTC m=+0.729969111 container remove 8a7bc5ed395cdd2a508d2da2f212e0dff66f4fbd551ed7eb5af45b3260323e17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bohr, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 05:26:45 np0005603787 systemd[1]: libpod-conmon-8a7bc5ed395cdd2a508d2da2f212e0dff66f4fbd551ed7eb5af45b3260323e17.scope: Deactivated successfully.
Jan 31 05:26:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.7 KiB/s wr, 27 op/s
Jan 31 05:26:46 np0005603787 podman[248063]: 2026-01-31 10:26:46.309671746 +0000 UTC m=+0.060231096 container create 56b98ecfbe716fe50eb4e1d01735e838c1b8004b29c29d96837c7cf0b38f7f7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:26:46 np0005603787 systemd[1]: Started libpod-conmon-56b98ecfbe716fe50eb4e1d01735e838c1b8004b29c29d96837c7cf0b38f7f7c.scope.
Jan 31 05:26:46 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:26:46 np0005603787 podman[248063]: 2026-01-31 10:26:46.284818012 +0000 UTC m=+0.035377412 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:26:46 np0005603787 podman[248063]: 2026-01-31 10:26:46.38936561 +0000 UTC m=+0.139924980 container init 56b98ecfbe716fe50eb4e1d01735e838c1b8004b29c29d96837c7cf0b38f7f7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sanderson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:26:46 np0005603787 podman[248063]: 2026-01-31 10:26:46.395485937 +0000 UTC m=+0.146045257 container start 56b98ecfbe716fe50eb4e1d01735e838c1b8004b29c29d96837c7cf0b38f7f7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 05:26:46 np0005603787 podman[248063]: 2026-01-31 10:26:46.399720822 +0000 UTC m=+0.150280252 container attach 56b98ecfbe716fe50eb4e1d01735e838c1b8004b29c29d96837c7cf0b38f7f7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:26:46 np0005603787 gifted_sanderson[248079]: 167 167
Jan 31 05:26:46 np0005603787 systemd[1]: libpod-56b98ecfbe716fe50eb4e1d01735e838c1b8004b29c29d96837c7cf0b38f7f7c.scope: Deactivated successfully.
Jan 31 05:26:46 np0005603787 podman[248063]: 2026-01-31 10:26:46.401960642 +0000 UTC m=+0.152519982 container died 56b98ecfbe716fe50eb4e1d01735e838c1b8004b29c29d96837c7cf0b38f7f7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:26:46 np0005603787 systemd[1]: var-lib-containers-storage-overlay-70d556329916530a743e1d91721470aa4120beb68fc7187fac4c85e572cce589-merged.mount: Deactivated successfully.
Jan 31 05:26:46 np0005603787 podman[248063]: 2026-01-31 10:26:46.452178126 +0000 UTC m=+0.202737486 container remove 56b98ecfbe716fe50eb4e1d01735e838c1b8004b29c29d96837c7cf0b38f7f7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sanderson, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 05:26:46 np0005603787 systemd[1]: libpod-conmon-56b98ecfbe716fe50eb4e1d01735e838c1b8004b29c29d96837c7cf0b38f7f7c.scope: Deactivated successfully.
Jan 31 05:26:46 np0005603787 podman[248104]: 2026-01-31 10:26:46.639946124 +0000 UTC m=+0.056651919 container create cf066ec73a3b8d86f42320f0374f10d27402b8d1357f1c3ad7cb0d38fd326b80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_varahamihira, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 05:26:46 np0005603787 systemd[1]: Started libpod-conmon-cf066ec73a3b8d86f42320f0374f10d27402b8d1357f1c3ad7cb0d38fd326b80.scope.
Jan 31 05:26:46 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:26:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e4c9f4e177f0501e2d7d7ddd60e9df5fb2bab073957984764fa5508490bc45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:26:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e4c9f4e177f0501e2d7d7ddd60e9df5fb2bab073957984764fa5508490bc45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:26:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e4c9f4e177f0501e2d7d7ddd60e9df5fb2bab073957984764fa5508490bc45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:26:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e4c9f4e177f0501e2d7d7ddd60e9df5fb2bab073957984764fa5508490bc45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:26:46 np0005603787 podman[248104]: 2026-01-31 10:26:46.616509589 +0000 UTC m=+0.033215424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:26:46 np0005603787 podman[248104]: 2026-01-31 10:26:46.740800053 +0000 UTC m=+0.157505888 container init cf066ec73a3b8d86f42320f0374f10d27402b8d1357f1c3ad7cb0d38fd326b80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:26:46 np0005603787 podman[248104]: 2026-01-31 10:26:46.749409477 +0000 UTC m=+0.166115272 container start cf066ec73a3b8d86f42320f0374f10d27402b8d1357f1c3ad7cb0d38fd326b80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:26:46 np0005603787 podman[248104]: 2026-01-31 10:26:46.75321464 +0000 UTC m=+0.169920415 container attach cf066ec73a3b8d86f42320f0374f10d27402b8d1357f1c3ad7cb0d38fd326b80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]: {
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:    "0": [
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:        {
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "devices": [
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "/dev/loop3"
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            ],
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "lv_name": "ceph_lv0",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "lv_size": "21470642176",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "name": "ceph_lv0",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "tags": {
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.cluster_name": "ceph",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.crush_device_class": "",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.encrypted": "0",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.objectstore": "bluestore",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.osd_id": "0",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.type": "block",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.vdo": "0",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.with_tpm": "0"
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            },
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "type": "block",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "vg_name": "ceph_vg0"
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:        }
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:    ],
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:    "1": [
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:        {
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "devices": [
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "/dev/loop4"
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            ],
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "lv_name": "ceph_lv1",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "lv_size": "21470642176",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "name": "ceph_lv1",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "tags": {
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.cluster_name": "ceph",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.crush_device_class": "",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.encrypted": "0",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.objectstore": "bluestore",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.osd_id": "1",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.type": "block",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.vdo": "0",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.with_tpm": "0"
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            },
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "type": "block",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "vg_name": "ceph_vg1"
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:        }
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:    ],
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:    "2": [
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:        {
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "devices": [
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "/dev/loop5"
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            ],
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "lv_name": "ceph_lv2",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "lv_size": "21470642176",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "name": "ceph_lv2",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "tags": {
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.cluster_name": "ceph",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.crush_device_class": "",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.encrypted": "0",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.objectstore": "bluestore",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.osd_id": "2",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.type": "block",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.vdo": "0",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:                "ceph.with_tpm": "0"
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            },
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "type": "block",
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:            "vg_name": "ceph_vg2"
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:        }
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]:    ]
Jan 31 05:26:47 np0005603787 quizzical_varahamihira[248120]: }
Jan 31 05:26:47 np0005603787 systemd[1]: libpod-cf066ec73a3b8d86f42320f0374f10d27402b8d1357f1c3ad7cb0d38fd326b80.scope: Deactivated successfully.
Jan 31 05:26:47 np0005603787 podman[248104]: 2026-01-31 10:26:47.050242005 +0000 UTC m=+0.466947800 container died cf066ec73a3b8d86f42320f0374f10d27402b8d1357f1c3ad7cb0d38fd326b80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 05:26:47 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f7e4c9f4e177f0501e2d7d7ddd60e9df5fb2bab073957984764fa5508490bc45-merged.mount: Deactivated successfully.
Jan 31 05:26:47 np0005603787 podman[248104]: 2026-01-31 10:26:47.095174075 +0000 UTC m=+0.511879860 container remove cf066ec73a3b8d86f42320f0374f10d27402b8d1357f1c3ad7cb0d38fd326b80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_varahamihira, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:26:47 np0005603787 systemd[1]: libpod-conmon-cf066ec73a3b8d86f42320f0374f10d27402b8d1357f1c3ad7cb0d38fd326b80.scope: Deactivated successfully.
Jan 31 05:26:47 np0005603787 podman[248203]: 2026-01-31 10:26:47.568446037 +0000 UTC m=+0.051472339 container create 98e89089795eff12f30f84f0cffb2fc01a54bbaa06c36ddc47be6bd85bd71fd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_bardeen, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:26:47 np0005603787 systemd[1]: Started libpod-conmon-98e89089795eff12f30f84f0cffb2fc01a54bbaa06c36ddc47be6bd85bd71fd1.scope.
Jan 31 05:26:47 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:26:47 np0005603787 podman[248203]: 2026-01-31 10:26:47.640748319 +0000 UTC m=+0.123774671 container init 98e89089795eff12f30f84f0cffb2fc01a54bbaa06c36ddc47be6bd85bd71fd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 05:26:47 np0005603787 podman[248203]: 2026-01-31 10:26:47.548386761 +0000 UTC m=+0.031413123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:26:47 np0005603787 podman[248203]: 2026-01-31 10:26:47.651237985 +0000 UTC m=+0.134264287 container start 98e89089795eff12f30f84f0cffb2fc01a54bbaa06c36ddc47be6bd85bd71fd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_bardeen, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 05:26:47 np0005603787 podman[248203]: 2026-01-31 10:26:47.655367717 +0000 UTC m=+0.138394109 container attach 98e89089795eff12f30f84f0cffb2fc01a54bbaa06c36ddc47be6bd85bd71fd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:26:47 np0005603787 jolly_bardeen[248219]: 167 167
Jan 31 05:26:47 np0005603787 systemd[1]: libpod-98e89089795eff12f30f84f0cffb2fc01a54bbaa06c36ddc47be6bd85bd71fd1.scope: Deactivated successfully.
Jan 31 05:26:47 np0005603787 conmon[248219]: conmon 98e89089795eff12f30f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98e89089795eff12f30f84f0cffb2fc01a54bbaa06c36ddc47be6bd85bd71fd1.scope/container/memory.events
Jan 31 05:26:47 np0005603787 podman[248203]: 2026-01-31 10:26:47.660057084 +0000 UTC m=+0.143083386 container died 98e89089795eff12f30f84f0cffb2fc01a54bbaa06c36ddc47be6bd85bd71fd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:26:47 np0005603787 systemd[1]: var-lib-containers-storage-overlay-1fa4eb4501f8e75b9150728bf9130a1d1d6aba4814151d476f55efd105ba38de-merged.mount: Deactivated successfully.
Jan 31 05:26:47 np0005603787 podman[248203]: 2026-01-31 10:26:47.696817502 +0000 UTC m=+0.179843794 container remove 98e89089795eff12f30f84f0cffb2fc01a54bbaa06c36ddc47be6bd85bd71fd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 05:26:47 np0005603787 systemd[1]: libpod-conmon-98e89089795eff12f30f84f0cffb2fc01a54bbaa06c36ddc47be6bd85bd71fd1.scope: Deactivated successfully.
Jan 31 05:26:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.7 KiB/s wr, 26 op/s
Jan 31 05:26:47 np0005603787 podman[248242]: 2026-01-31 10:26:47.850351851 +0000 UTC m=+0.039942076 container create e36effc45a1d067ba3cf13f347aad66b4ecde86245a2f85b7e9da79015a1ff04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 05:26:47 np0005603787 systemd[1]: Started libpod-conmon-e36effc45a1d067ba3cf13f347aad66b4ecde86245a2f85b7e9da79015a1ff04.scope.
Jan 31 05:26:47 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:26:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd778295e20fba9c292994190d05d0e9f6257bec4394cf2bb020f8adf465b383/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:26:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd778295e20fba9c292994190d05d0e9f6257bec4394cf2bb020f8adf465b383/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:26:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd778295e20fba9c292994190d05d0e9f6257bec4394cf2bb020f8adf465b383/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:26:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd778295e20fba9c292994190d05d0e9f6257bec4394cf2bb020f8adf465b383/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:26:47 np0005603787 podman[248242]: 2026-01-31 10:26:47.830692307 +0000 UTC m=+0.020282542 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:26:47 np0005603787 podman[248242]: 2026-01-31 10:26:47.941700801 +0000 UTC m=+0.131291096 container init e36effc45a1d067ba3cf13f347aad66b4ecde86245a2f85b7e9da79015a1ff04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 05:26:47 np0005603787 podman[248242]: 2026-01-31 10:26:47.955440994 +0000 UTC m=+0.145031249 container start e36effc45a1d067ba3cf13f347aad66b4ecde86245a2f85b7e9da79015a1ff04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 05:26:47 np0005603787 podman[248242]: 2026-01-31 10:26:47.967217864 +0000 UTC m=+0.156808189 container attach e36effc45a1d067ba3cf13f347aad66b4ecde86245a2f85b7e9da79015a1ff04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hertz, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:26:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:26:48 np0005603787 lvm[248334]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:26:48 np0005603787 lvm[248334]: VG ceph_vg0 finished
Jan 31 05:26:48 np0005603787 lvm[248337]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:26:48 np0005603787 lvm[248337]: VG ceph_vg1 finished
Jan 31 05:26:48 np0005603787 lvm[248339]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:26:48 np0005603787 lvm[248339]: VG ceph_vg2 finished
Jan 31 05:26:48 np0005603787 dreamy_hertz[248258]: {}
Jan 31 05:26:48 np0005603787 systemd[1]: libpod-e36effc45a1d067ba3cf13f347aad66b4ecde86245a2f85b7e9da79015a1ff04.scope: Deactivated successfully.
Jan 31 05:26:48 np0005603787 systemd[1]: libpod-e36effc45a1d067ba3cf13f347aad66b4ecde86245a2f85b7e9da79015a1ff04.scope: Consumed 1.007s CPU time.
Jan 31 05:26:48 np0005603787 podman[248242]: 2026-01-31 10:26:48.717036764 +0000 UTC m=+0.906626989 container died e36effc45a1d067ba3cf13f347aad66b4ecde86245a2f85b7e9da79015a1ff04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hertz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:26:48 np0005603787 systemd[1]: var-lib-containers-storage-overlay-cd778295e20fba9c292994190d05d0e9f6257bec4394cf2bb020f8adf465b383-merged.mount: Deactivated successfully.
Jan 31 05:26:48 np0005603787 podman[248242]: 2026-01-31 10:26:48.761898302 +0000 UTC m=+0.951488557 container remove e36effc45a1d067ba3cf13f347aad66b4ecde86245a2f85b7e9da79015a1ff04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hertz, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 05:26:48 np0005603787 systemd[1]: libpod-conmon-e36effc45a1d067ba3cf13f347aad66b4ecde86245a2f85b7e9da79015a1ff04.scope: Deactivated successfully.
Jan 31 05:26:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:26:48 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:26:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:26:48 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:26:49 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:26:49 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:26:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 0 B/s wr, 0 op/s
Jan 31 05:26:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:26:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:26:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 2.450943614167069e-07 of space, bias 1.0, pg target 7.352830842501207e-05 quantized to 32 (current 32)
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.527403468629877e-06 of space, bias 4.0, pg target 0.0018328841623558524 quantized to 16 (current 16)
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:26:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:26:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:26:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:26:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:26:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:27:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:05 np0005603787 podman[248378]: 2026-01-31 10:27:05.865725746 +0000 UTC m=+0.076062696 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:27:05 np0005603787 podman[248379]: 2026-01-31 10:27:05.866278861 +0000 UTC m=+0.074864363 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 05:27:06 np0005603787 nova_compute[238603]: 2026-01-31 10:27:06.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:27:06 np0005603787 nova_compute[238603]: 2026-01-31 10:27:06.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:27:06 np0005603787 nova_compute[238603]: 2026-01-31 10:27:06.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:27:07 np0005603787 nova_compute[238603]: 2026-01-31 10:27:07.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:27:07 np0005603787 nova_compute[238603]: 2026-01-31 10:27:07.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:27:07 np0005603787 nova_compute[238603]: 2026-01-31 10:27:07.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:27:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:27:08 np0005603787 nova_compute[238603]: 2026-01-31 10:27:08.098 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:27:08 np0005603787 nova_compute[238603]: 2026-01-31 10:27:08.101 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:27:08 np0005603787 nova_compute[238603]: 2026-01-31 10:27:08.101 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:27:08 np0005603787 nova_compute[238603]: 2026-01-31 10:27:08.101 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:27:08 np0005603787 nova_compute[238603]: 2026-01-31 10:27:08.119 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:27:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:11 np0005603787 nova_compute[238603]: 2026-01-31 10:27:11.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:27:11 np0005603787 nova_compute[238603]: 2026-01-31 10:27:11.138 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:27:11 np0005603787 nova_compute[238603]: 2026-01-31 10:27:11.138 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:27:11 np0005603787 nova_compute[238603]: 2026-01-31 10:27:11.138 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:27:11 np0005603787 nova_compute[238603]: 2026-01-31 10:27:11.139 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:27:11 np0005603787 nova_compute[238603]: 2026-01-31 10:27:11.139 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:27:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:27:11 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/330963221' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:27:11 np0005603787 nova_compute[238603]: 2026-01-31 10:27:11.754 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:27:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:11 np0005603787 nova_compute[238603]: 2026-01-31 10:27:11.897 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:27:11 np0005603787 nova_compute[238603]: 2026-01-31 10:27:11.898 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5086MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:27:11 np0005603787 nova_compute[238603]: 2026-01-31 10:27:11.898 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:27:11 np0005603787 nova_compute[238603]: 2026-01-31 10:27:11.898 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:27:11 np0005603787 nova_compute[238603]: 2026-01-31 10:27:11.969 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:27:11 np0005603787 nova_compute[238603]: 2026-01-31 10:27:11.970 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:27:11 np0005603787 nova_compute[238603]: 2026-01-31 10:27:11.996 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:27:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:27:12 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3588912161' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:27:12 np0005603787 nova_compute[238603]: 2026-01-31 10:27:12.545 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:27:12 np0005603787 nova_compute[238603]: 2026-01-31 10:27:12.552 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:27:12 np0005603787 nova_compute[238603]: 2026-01-31 10:27:12.571 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:27:12 np0005603787 nova_compute[238603]: 2026-01-31 10:27:12.574 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:27:12 np0005603787 nova_compute[238603]: 2026-01-31 10:27:12.575 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:27:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:27:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:27:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:27:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:27:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:27:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:27:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:27:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:15 np0005603787 nova_compute[238603]: 2026-01-31 10:27:15.577 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:27:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:27:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:21 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:27:21 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4727 writes, 21K keys, 4727 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 4727 writes, 4727 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1354 writes, 6118 keys, 1354 commit groups, 1.0 writes per commit group, ingest: 8.88 MB, 0.01 MB/s#012Interval WAL: 1354 writes, 1354 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    130.0      0.19              0.06        12    0.016       0      0       0.0       0.0#012  L6      1/0    7.42 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    185.8    152.2      0.53              0.18        11    0.048     48K   5789       0.0       0.0#012 Sum      1/0    7.42 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    135.9    146.3      0.72              0.24        23    0.031     48K   5789       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.2    137.7    138.9      0.33              0.11        10    0.033     24K   2589       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    185.8    152.2      0.53              0.18        11    0.048     48K   5789       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    132.1      0.19              0.06        11    0.017       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.024, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.10 GB read, 0.05 MB/s read, 0.7 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.04 GB read, 0.08 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b1fd4298d0#2 capacity: 304.00 MB usage: 9.31 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(570,8.91 MB,2.92971%) FilterBlock(24,144.55 KB,0.0464339%) IndexBlock(24,270.95 KB,0.0870403%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 05:27:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:27:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/43318060' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:27:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:27:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/43318060' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:27:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:27:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:27:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:27:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:36 np0005603787 podman[248467]: 2026-01-31 10:27:36.862819952 +0000 UTC m=+0.077322320 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 05:27:36 np0005603787 podman[248468]: 2026-01-31 10:27:36.863541612 +0000 UTC m=+0.068741738 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 05:27:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:27:37.068 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:27:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:27:37.069 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:27:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:27:37.069 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:27:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:27:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:27:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:27:43
Jan 31 05:27:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:27:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:27:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['backups', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'volumes']
Jan 31 05:27:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:27:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:27:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:27:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:27:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:27:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:27:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:27:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:27:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:27:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:27:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:27:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:27:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:27:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:27:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:27:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:27:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:27:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:27:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:27:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:27:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:27:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:27:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:27:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:27:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:27:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:27:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:27:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:27:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:27:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:27:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:49 np0005603787 podman[248657]: 2026-01-31 10:27:49.986311387 +0000 UTC m=+0.055949013 container create ed9f486c6e1ecd2e31a702ca75a81818c9e268de3894ba62b2f0c13a0473980e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 05:27:50 np0005603787 systemd[1]: Started libpod-conmon-ed9f486c6e1ecd2e31a702ca75a81818c9e268de3894ba62b2f0c13a0473980e.scope.
Jan 31 05:27:50 np0005603787 podman[248657]: 2026-01-31 10:27:49.964822896 +0000 UTC m=+0.034460502 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:27:50 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:27:50 np0005603787 podman[248657]: 2026-01-31 10:27:50.09451814 +0000 UTC m=+0.164155776 container init ed9f486c6e1ecd2e31a702ca75a81818c9e268de3894ba62b2f0c13a0473980e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hugle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:27:50 np0005603787 podman[248657]: 2026-01-31 10:27:50.104098188 +0000 UTC m=+0.173735814 container start ed9f486c6e1ecd2e31a702ca75a81818c9e268de3894ba62b2f0c13a0473980e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hugle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:27:50 np0005603787 podman[248657]: 2026-01-31 10:27:50.107559063 +0000 UTC m=+0.177196729 container attach ed9f486c6e1ecd2e31a702ca75a81818c9e268de3894ba62b2f0c13a0473980e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:27:50 np0005603787 dreamy_hugle[248673]: 167 167
Jan 31 05:27:50 np0005603787 systemd[1]: libpod-ed9f486c6e1ecd2e31a702ca75a81818c9e268de3894ba62b2f0c13a0473980e.scope: Deactivated successfully.
Jan 31 05:27:50 np0005603787 conmon[248673]: conmon ed9f486c6e1ecd2e31a7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed9f486c6e1ecd2e31a702ca75a81818c9e268de3894ba62b2f0c13a0473980e.scope/container/memory.events
Jan 31 05:27:50 np0005603787 podman[248657]: 2026-01-31 10:27:50.1130004 +0000 UTC m=+0.182638026 container died ed9f486c6e1ecd2e31a702ca75a81818c9e268de3894ba62b2f0c13a0473980e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 05:27:50 np0005603787 systemd[1]: var-lib-containers-storage-overlay-2d2d26e51d4be3e2872087dc162601cdcde1675836850a15423b07a4421489d5-merged.mount: Deactivated successfully.
Jan 31 05:27:50 np0005603787 podman[248657]: 2026-01-31 10:27:50.166960817 +0000 UTC m=+0.236598433 container remove ed9f486c6e1ecd2e31a702ca75a81818c9e268de3894ba62b2f0c13a0473980e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hugle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 05:27:50 np0005603787 systemd[1]: libpod-conmon-ed9f486c6e1ecd2e31a702ca75a81818c9e268de3894ba62b2f0c13a0473980e.scope: Deactivated successfully.
Jan 31 05:27:50 np0005603787 podman[248696]: 2026-01-31 10:27:50.357873334 +0000 UTC m=+0.063870456 container create c3d563a728d0f0761514bac2c3680cf1cc93b585bbd56226563c2ffc3365c68b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_grothendieck, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:27:50 np0005603787 systemd[1]: Started libpod-conmon-c3d563a728d0f0761514bac2c3680cf1cc93b585bbd56226563c2ffc3365c68b.scope.
Jan 31 05:27:50 np0005603787 podman[248696]: 2026-01-31 10:27:50.331181923 +0000 UTC m=+0.037179105 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:27:50 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:27:50 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456efd9100017449186807cf3c8c1e1b079da1dfbef8bd664fe733206e5c8106/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:27:50 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456efd9100017449186807cf3c8c1e1b079da1dfbef8bd664fe733206e5c8106/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:27:50 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456efd9100017449186807cf3c8c1e1b079da1dfbef8bd664fe733206e5c8106/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:27:50 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456efd9100017449186807cf3c8c1e1b079da1dfbef8bd664fe733206e5c8106/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:27:50 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456efd9100017449186807cf3c8c1e1b079da1dfbef8bd664fe733206e5c8106/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:27:50 np0005603787 podman[248696]: 2026-01-31 10:27:50.45317235 +0000 UTC m=+0.159169452 container init c3d563a728d0f0761514bac2c3680cf1cc93b585bbd56226563c2ffc3365c68b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 05:27:50 np0005603787 podman[248696]: 2026-01-31 10:27:50.467727823 +0000 UTC m=+0.173724955 container start c3d563a728d0f0761514bac2c3680cf1cc93b585bbd56226563c2ffc3365c68b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_grothendieck, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:27:50 np0005603787 podman[248696]: 2026-01-31 10:27:50.471038432 +0000 UTC m=+0.177035524 container attach c3d563a728d0f0761514bac2c3680cf1cc93b585bbd56226563c2ffc3365c68b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:27:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:27:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:27:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:27:50 np0005603787 dazzling_grothendieck[248712]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:27:50 np0005603787 dazzling_grothendieck[248712]: --> All data devices are unavailable
Jan 31 05:27:50 np0005603787 systemd[1]: libpod-c3d563a728d0f0761514bac2c3680cf1cc93b585bbd56226563c2ffc3365c68b.scope: Deactivated successfully.
Jan 31 05:27:50 np0005603787 podman[248696]: 2026-01-31 10:27:50.934126533 +0000 UTC m=+0.640123635 container died c3d563a728d0f0761514bac2c3680cf1cc93b585bbd56226563c2ffc3365c68b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:27:50 np0005603787 systemd[1]: var-lib-containers-storage-overlay-456efd9100017449186807cf3c8c1e1b079da1dfbef8bd664fe733206e5c8106-merged.mount: Deactivated successfully.
Jan 31 05:27:50 np0005603787 podman[248696]: 2026-01-31 10:27:50.982955752 +0000 UTC m=+0.688952854 container remove c3d563a728d0f0761514bac2c3680cf1cc93b585bbd56226563c2ffc3365c68b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:27:50 np0005603787 systemd[1]: libpod-conmon-c3d563a728d0f0761514bac2c3680cf1cc93b585bbd56226563c2ffc3365c68b.scope: Deactivated successfully.
Jan 31 05:27:51 np0005603787 podman[248807]: 2026-01-31 10:27:51.406609367 +0000 UTC m=+0.049435906 container create 532659e45961e135194939808b5b32be5f43586db7830e4078fe19004391b81d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:27:51 np0005603787 systemd[1]: Started libpod-conmon-532659e45961e135194939808b5b32be5f43586db7830e4078fe19004391b81d.scope.
Jan 31 05:27:51 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:27:51 np0005603787 podman[248807]: 2026-01-31 10:27:51.382676521 +0000 UTC m=+0.025503110 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:27:51 np0005603787 podman[248807]: 2026-01-31 10:27:51.483296549 +0000 UTC m=+0.126123148 container init 532659e45961e135194939808b5b32be5f43586db7830e4078fe19004391b81d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_pare, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:27:51 np0005603787 podman[248807]: 2026-01-31 10:27:51.491541332 +0000 UTC m=+0.134367861 container start 532659e45961e135194939808b5b32be5f43586db7830e4078fe19004391b81d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:27:51 np0005603787 podman[248807]: 2026-01-31 10:27:51.495279753 +0000 UTC m=+0.138106342 container attach 532659e45961e135194939808b5b32be5f43586db7830e4078fe19004391b81d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_pare, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 05:27:51 np0005603787 cranky_pare[248823]: 167 167
Jan 31 05:27:51 np0005603787 systemd[1]: libpod-532659e45961e135194939808b5b32be5f43586db7830e4078fe19004391b81d.scope: Deactivated successfully.
Jan 31 05:27:51 np0005603787 podman[248807]: 2026-01-31 10:27:51.496550788 +0000 UTC m=+0.139377327 container died 532659e45961e135194939808b5b32be5f43586db7830e4078fe19004391b81d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_pare, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 05:27:51 np0005603787 systemd[1]: var-lib-containers-storage-overlay-20f1b6da59aae9e4ff99e63f8853d2f63b509d029c0c3107635d7da2ffd0b203-merged.mount: Deactivated successfully.
Jan 31 05:27:51 np0005603787 podman[248807]: 2026-01-31 10:27:51.539501737 +0000 UTC m=+0.182328276 container remove 532659e45961e135194939808b5b32be5f43586db7830e4078fe19004391b81d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_pare, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:27:51 np0005603787 systemd[1]: libpod-conmon-532659e45961e135194939808b5b32be5f43586db7830e4078fe19004391b81d.scope: Deactivated successfully.
Jan 31 05:27:51 np0005603787 podman[248847]: 2026-01-31 10:27:51.71876415 +0000 UTC m=+0.055819308 container create 23793da2b0fa63bbb2e247ac8b4a99257022415612aef99acab06365bdfc8ca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 05:27:51 np0005603787 systemd[1]: Started libpod-conmon-23793da2b0fa63bbb2e247ac8b4a99257022415612aef99acab06365bdfc8ca1.scope.
Jan 31 05:27:51 np0005603787 podman[248847]: 2026-01-31 10:27:51.698913684 +0000 UTC m=+0.035968862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:27:51 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:27:51 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77135a02b57f9a9bbf042d89cc7e8fec9da54f21069516a25004ed81a44796b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:27:51 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77135a02b57f9a9bbf042d89cc7e8fec9da54f21069516a25004ed81a44796b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:27:51 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77135a02b57f9a9bbf042d89cc7e8fec9da54f21069516a25004ed81a44796b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:27:51 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77135a02b57f9a9bbf042d89cc7e8fec9da54f21069516a25004ed81a44796b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:27:51 np0005603787 podman[248847]: 2026-01-31 10:27:51.821338142 +0000 UTC m=+0.158393310 container init 23793da2b0fa63bbb2e247ac8b4a99257022415612aef99acab06365bdfc8ca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_villani, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:27:51 np0005603787 podman[248847]: 2026-01-31 10:27:51.837587131 +0000 UTC m=+0.174642279 container start 23793da2b0fa63bbb2e247ac8b4a99257022415612aef99acab06365bdfc8ca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_villani, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 05:27:51 np0005603787 podman[248847]: 2026-01-31 10:27:51.841120426 +0000 UTC m=+0.178175624 container attach 23793da2b0fa63bbb2e247ac8b4a99257022415612aef99acab06365bdfc8ca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_villani, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 05:27:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:52 np0005603787 exciting_villani[248863]: {
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:    "0": [
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:        {
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "devices": [
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "/dev/loop3"
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            ],
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "lv_name": "ceph_lv0",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "lv_size": "21470642176",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "name": "ceph_lv0",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "tags": {
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.cluster_name": "ceph",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.crush_device_class": "",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.encrypted": "0",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.objectstore": "bluestore",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.osd_id": "0",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.type": "block",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.vdo": "0",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.with_tpm": "0"
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            },
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "type": "block",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "vg_name": "ceph_vg0"
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:        }
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:    ],
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:    "1": [
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:        {
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "devices": [
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "/dev/loop4"
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            ],
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "lv_name": "ceph_lv1",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "lv_size": "21470642176",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "name": "ceph_lv1",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "tags": {
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.cluster_name": "ceph",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.crush_device_class": "",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.encrypted": "0",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.objectstore": "bluestore",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.osd_id": "1",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.type": "block",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.vdo": "0",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.with_tpm": "0"
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            },
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "type": "block",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "vg_name": "ceph_vg1"
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:        }
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:    ],
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:    "2": [
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:        {
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "devices": [
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "/dev/loop5"
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            ],
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "lv_name": "ceph_lv2",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "lv_size": "21470642176",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "name": "ceph_lv2",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "tags": {
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.cluster_name": "ceph",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.crush_device_class": "",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.encrypted": "0",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.objectstore": "bluestore",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.osd_id": "2",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.type": "block",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.vdo": "0",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:                "ceph.with_tpm": "0"
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            },
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "type": "block",
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:            "vg_name": "ceph_vg2"
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:        }
Jan 31 05:27:52 np0005603787 exciting_villani[248863]:    ]
Jan 31 05:27:52 np0005603787 exciting_villani[248863]: }
Jan 31 05:27:52 np0005603787 systemd[1]: libpod-23793da2b0fa63bbb2e247ac8b4a99257022415612aef99acab06365bdfc8ca1.scope: Deactivated successfully.
Jan 31 05:27:52 np0005603787 podman[248847]: 2026-01-31 10:27:52.14296336 +0000 UTC m=+0.480018548 container died 23793da2b0fa63bbb2e247ac8b4a99257022415612aef99acab06365bdfc8ca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 05:27:52 np0005603787 systemd[1]: var-lib-containers-storage-overlay-77135a02b57f9a9bbf042d89cc7e8fec9da54f21069516a25004ed81a44796b1-merged.mount: Deactivated successfully.
Jan 31 05:27:52 np0005603787 podman[248847]: 2026-01-31 10:27:52.193800185 +0000 UTC m=+0.530855373 container remove 23793da2b0fa63bbb2e247ac8b4a99257022415612aef99acab06365bdfc8ca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_villani, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Jan 31 05:27:52 np0005603787 systemd[1]: libpod-conmon-23793da2b0fa63bbb2e247ac8b4a99257022415612aef99acab06365bdfc8ca1.scope: Deactivated successfully.
Jan 31 05:27:52 np0005603787 podman[248948]: 2026-01-31 10:27:52.713679019 +0000 UTC m=+0.063030954 container create df91ec624b1dd70a446d896fdcfb029edf53dc3e2942871eaae1dc81ddf89a79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 05:27:52 np0005603787 systemd[1]: Started libpod-conmon-df91ec624b1dd70a446d896fdcfb029edf53dc3e2942871eaae1dc81ddf89a79.scope.
Jan 31 05:27:52 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:27:52 np0005603787 podman[248948]: 2026-01-31 10:27:52.68743524 +0000 UTC m=+0.036787225 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:27:52 np0005603787 podman[248948]: 2026-01-31 10:27:52.79031703 +0000 UTC m=+0.139668975 container init df91ec624b1dd70a446d896fdcfb029edf53dc3e2942871eaae1dc81ddf89a79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:27:52 np0005603787 podman[248948]: 2026-01-31 10:27:52.798717627 +0000 UTC m=+0.148069522 container start df91ec624b1dd70a446d896fdcfb029edf53dc3e2942871eaae1dc81ddf89a79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_northcutt, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:27:52 np0005603787 podman[248948]: 2026-01-31 10:27:52.802118528 +0000 UTC m=+0.151470483 container attach df91ec624b1dd70a446d896fdcfb029edf53dc3e2942871eaae1dc81ddf89a79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_northcutt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:27:52 np0005603787 distracted_northcutt[248964]: 167 167
Jan 31 05:27:52 np0005603787 systemd[1]: libpod-df91ec624b1dd70a446d896fdcfb029edf53dc3e2942871eaae1dc81ddf89a79.scope: Deactivated successfully.
Jan 31 05:27:52 np0005603787 podman[248948]: 2026-01-31 10:27:52.805027717 +0000 UTC m=+0.154379652 container died df91ec624b1dd70a446d896fdcfb029edf53dc3e2942871eaae1dc81ddf89a79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Jan 31 05:27:52 np0005603787 systemd[1]: var-lib-containers-storage-overlay-65ebf8ee6d395e452215faad85a2bb6028d8d9bcd8080b82ff4676747549c1f0-merged.mount: Deactivated successfully.
Jan 31 05:27:52 np0005603787 podman[248948]: 2026-01-31 10:27:52.848559743 +0000 UTC m=+0.197911648 container remove df91ec624b1dd70a446d896fdcfb029edf53dc3e2942871eaae1dc81ddf89a79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_northcutt, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 05:27:52 np0005603787 systemd[1]: libpod-conmon-df91ec624b1dd70a446d896fdcfb029edf53dc3e2942871eaae1dc81ddf89a79.scope: Deactivated successfully.
Jan 31 05:27:53 np0005603787 podman[248986]: 2026-01-31 10:27:53.058210387 +0000 UTC m=+0.060951977 container create bf8ec50a2a5e1a76fcb3c20477858bbbbd3402eec2fc5c82ffdb84ca3b5c3d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kirch, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 05:27:53 np0005603787 systemd[1]: Started libpod-conmon-bf8ec50a2a5e1a76fcb3c20477858bbbbd3402eec2fc5c82ffdb84ca3b5c3d63.scope.
Jan 31 05:27:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:27:53 np0005603787 podman[248986]: 2026-01-31 10:27:53.031631409 +0000 UTC m=+0.034373059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:27:53 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:27:53 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9cea99d8e81c23b1b8a97a9931318cc51e899daba209a361d5e23b21e5780ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:27:53 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9cea99d8e81c23b1b8a97a9931318cc51e899daba209a361d5e23b21e5780ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:27:53 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9cea99d8e81c23b1b8a97a9931318cc51e899daba209a361d5e23b21e5780ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:27:53 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9cea99d8e81c23b1b8a97a9931318cc51e899daba209a361d5e23b21e5780ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:27:53 np0005603787 podman[248986]: 2026-01-31 10:27:53.152523765 +0000 UTC m=+0.155265405 container init bf8ec50a2a5e1a76fcb3c20477858bbbbd3402eec2fc5c82ffdb84ca3b5c3d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kirch, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:27:53 np0005603787 podman[248986]: 2026-01-31 10:27:53.167102179 +0000 UTC m=+0.169843769 container start bf8ec50a2a5e1a76fcb3c20477858bbbbd3402eec2fc5c82ffdb84ca3b5c3d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kirch, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:27:53 np0005603787 podman[248986]: 2026-01-31 10:27:53.171278032 +0000 UTC m=+0.174019632 container attach bf8ec50a2a5e1a76fcb3c20477858bbbbd3402eec2fc5c82ffdb84ca3b5c3d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:27:53 np0005603787 lvm[249083]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:27:53 np0005603787 lvm[249083]: VG ceph_vg1 finished
Jan 31 05:27:53 np0005603787 lvm[249082]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:27:53 np0005603787 lvm[249082]: VG ceph_vg0 finished
Jan 31 05:27:53 np0005603787 lvm[249085]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:27:53 np0005603787 lvm[249085]: VG ceph_vg2 finished
Jan 31 05:27:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:53 np0005603787 funny_kirch[249003]: {}
Jan 31 05:27:53 np0005603787 systemd[1]: libpod-bf8ec50a2a5e1a76fcb3c20477858bbbbd3402eec2fc5c82ffdb84ca3b5c3d63.scope: Deactivated successfully.
Jan 31 05:27:53 np0005603787 systemd[1]: libpod-bf8ec50a2a5e1a76fcb3c20477858bbbbd3402eec2fc5c82ffdb84ca3b5c3d63.scope: Consumed 1.240s CPU time.
Jan 31 05:27:53 np0005603787 podman[248986]: 2026-01-31 10:27:53.976953708 +0000 UTC m=+0.979695318 container died bf8ec50a2a5e1a76fcb3c20477858bbbbd3402eec2fc5c82ffdb84ca3b5c3d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kirch, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 05:27:54 np0005603787 systemd[1]: var-lib-containers-storage-overlay-b9cea99d8e81c23b1b8a97a9931318cc51e899daba209a361d5e23b21e5780ce-merged.mount: Deactivated successfully.
Jan 31 05:27:54 np0005603787 podman[248986]: 2026-01-31 10:27:54.026436565 +0000 UTC m=+1.029178155 container remove bf8ec50a2a5e1a76fcb3c20477858bbbbd3402eec2fc5c82ffdb84ca3b5c3d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kirch, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:27:54 np0005603787 systemd[1]: libpod-conmon-bf8ec50a2a5e1a76fcb3c20477858bbbbd3402eec2fc5c82ffdb84ca3b5c3d63.scope: Deactivated successfully.
Jan 31 05:27:54 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:27:54 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:27:54 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:27:54 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 2.450943614167069e-07 of space, bias 1.0, pg target 7.352830842501207e-05 quantized to 32 (current 32)
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.527403468629877e-06 of space, bias 4.0, pg target 0.0018328841623558524 quantized to 16 (current 16)
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:27:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:27:55 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:27:55 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:27:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:27:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:27:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:28:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:07 np0005603787 nova_compute[238603]: 2026-01-31 10:28:07.101 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:28:07 np0005603787 nova_compute[238603]: 2026-01-31 10:28:07.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:28:07 np0005603787 nova_compute[238603]: 2026-01-31 10:28:07.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:28:07 np0005603787 nova_compute[238603]: 2026-01-31 10:28:07.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:28:07 np0005603787 nova_compute[238603]: 2026-01-31 10:28:07.104 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:28:07 np0005603787 podman[249126]: 2026-01-31 10:28:07.852400735 +0000 UTC m=+0.067327060 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 31 05:28:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:07 np0005603787 podman[249125]: 2026-01-31 10:28:07.929875578 +0000 UTC m=+0.146874069 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 05:28:08 np0005603787 nova_compute[238603]: 2026-01-31 10:28:08.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:28:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:28:09 np0005603787 nova_compute[238603]: 2026-01-31 10:28:09.097 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:28:09 np0005603787 nova_compute[238603]: 2026-01-31 10:28:09.101 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:28:09 np0005603787 nova_compute[238603]: 2026-01-31 10:28:09.102 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:28:09 np0005603787 nova_compute[238603]: 2026-01-31 10:28:09.102 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:28:09 np0005603787 nova_compute[238603]: 2026-01-31 10:28:09.127 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:28:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:11 np0005603787 nova_compute[238603]: 2026-01-31 10:28:11.101 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:28:11 np0005603787 nova_compute[238603]: 2026-01-31 10:28:11.139 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:28:11 np0005603787 nova_compute[238603]: 2026-01-31 10:28:11.140 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:28:11 np0005603787 nova_compute[238603]: 2026-01-31 10:28:11.140 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:28:11 np0005603787 nova_compute[238603]: 2026-01-31 10:28:11.140 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:28:11 np0005603787 nova_compute[238603]: 2026-01-31 10:28:11.141 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:28:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:28:11 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/97586117' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:28:11 np0005603787 nova_compute[238603]: 2026-01-31 10:28:11.698 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:28:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:11 np0005603787 nova_compute[238603]: 2026-01-31 10:28:11.896 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:28:11 np0005603787 nova_compute[238603]: 2026-01-31 10:28:11.898 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5102MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:28:11 np0005603787 nova_compute[238603]: 2026-01-31 10:28:11.898 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:28:11 np0005603787 nova_compute[238603]: 2026-01-31 10:28:11.898 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:28:11 np0005603787 nova_compute[238603]: 2026-01-31 10:28:11.955 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:28:11 np0005603787 nova_compute[238603]: 2026-01-31 10:28:11.956 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:28:11 np0005603787 nova_compute[238603]: 2026-01-31 10:28:11.981 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:28:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:28:12 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4067521328' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:28:12 np0005603787 nova_compute[238603]: 2026-01-31 10:28:12.543 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:28:12 np0005603787 nova_compute[238603]: 2026-01-31 10:28:12.548 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:28:12 np0005603787 nova_compute[238603]: 2026-01-31 10:28:12.566 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:28:12 np0005603787 nova_compute[238603]: 2026-01-31 10:28:12.568 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:28:12 np0005603787 nova_compute[238603]: 2026-01-31 10:28:12.568 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:28:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:28:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:28:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:28:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:28:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:28:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:28:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:28:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:15 np0005603787 nova_compute[238603]: 2026-01-31 10:28:15.565 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:28:15 np0005603787 nova_compute[238603]: 2026-01-31 10:28:15.607 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:28:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:28:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:28:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3390238917' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:28:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:28:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3390238917' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:28:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:28:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:24 np0005603787 ceph-osd[85879]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:28:24 np0005603787 ceph-osd[85879]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 6392 writes, 26K keys, 6392 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6392 writes, 1258 syncs, 5.08 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 525 writes, 1485 keys, 525 commit groups, 1.0 writes per commit group, ingest: 0.72 MB, 0.00 MB/s#012Interval WAL: 525 writes, 243 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 05:28:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:28:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:30 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:28:30 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1802.2 total, 600.0 interval#012Cumulative writes: 7664 writes, 30K keys, 7664 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7664 writes, 1654 syncs, 4.63 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 543 writes, 1415 keys, 543 commit groups, 1.0 writes per commit group, ingest: 0.73 MB, 0.00 MB/s#012Interval WAL: 543 writes, 244 syncs, 2.23 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 05:28:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:28:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:28:37.070 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:28:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:28:37.071 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:28:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:28:37.071 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:28:37 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:28:37 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.3 total, 600.0 interval#012Cumulative writes: 6308 writes, 25K keys, 6308 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6308 writes, 1195 syncs, 5.28 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 655 writes, 1714 keys, 655 commit groups, 1.0 writes per commit group, ingest: 0.87 MB, 0.00 MB/s#012Interval WAL: 655 writes, 298 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 05:28:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:28:38 np0005603787 podman[249213]: 2026-01-31 10:28:38.878271991 +0000 UTC m=+0.090909117 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 05:28:38 np0005603787 podman[249214]: 2026-01-31 10:28:38.882406942 +0000 UTC m=+0.089894659 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 05:28:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:41 np0005603787 ceph-mgr[75453]: [devicehealth INFO root] Check health
Jan 31 05:28:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:28:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:28:43
Jan 31 05:28:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:28:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:28:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'images', '.rgw.root', 'default.rgw.control', '.mgr', 'volumes', 'default.rgw.meta', 'default.rgw.log']
Jan 31 05:28:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:28:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:28:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:28:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:28:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:28:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:28:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:28:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:28:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:28:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:28:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:28:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:28:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:28:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:28:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:28:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:28:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:28:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:28:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:28:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 2.450943614167069e-07 of space, bias 1.0, pg target 7.352830842501207e-05 quantized to 32 (current 32)
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.527403468629877e-06 of space, bias 4.0, pg target 0.0018328841623558524 quantized to 16 (current 16)
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:28:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:28:55 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:28:55 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:28:55 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:28:55 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:28:55 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:28:55 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:28:55 np0005603787 podman[249471]: 2026-01-31 10:28:55.662749169 +0000 UTC m=+0.069132709 container create 73648ecacfbcf6887ce4c0c9383296bc5130c3c110f7e2f8fc4b362abd7b385d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:28:55 np0005603787 systemd[1]: Started libpod-conmon-73648ecacfbcf6887ce4c0c9383296bc5130c3c110f7e2f8fc4b362abd7b385d.scope.
Jan 31 05:28:55 np0005603787 podman[249471]: 2026-01-31 10:28:55.634972478 +0000 UTC m=+0.041356028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:28:55 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:28:55 np0005603787 podman[249471]: 2026-01-31 10:28:55.756926023 +0000 UTC m=+0.163309633 container init 73648ecacfbcf6887ce4c0c9383296bc5130c3c110f7e2f8fc4b362abd7b385d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_blackwell, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 05:28:55 np0005603787 podman[249471]: 2026-01-31 10:28:55.768730062 +0000 UTC m=+0.175113602 container start 73648ecacfbcf6887ce4c0c9383296bc5130c3c110f7e2f8fc4b362abd7b385d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:28:55 np0005603787 podman[249471]: 2026-01-31 10:28:55.772426731 +0000 UTC m=+0.178810281 container attach 73648ecacfbcf6887ce4c0c9383296bc5130c3c110f7e2f8fc4b362abd7b385d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Jan 31 05:28:55 np0005603787 keen_blackwell[249488]: 167 167
Jan 31 05:28:55 np0005603787 systemd[1]: libpod-73648ecacfbcf6887ce4c0c9383296bc5130c3c110f7e2f8fc4b362abd7b385d.scope: Deactivated successfully.
Jan 31 05:28:55 np0005603787 podman[249471]: 2026-01-31 10:28:55.776010659 +0000 UTC m=+0.182394169 container died 73648ecacfbcf6887ce4c0c9383296bc5130c3c110f7e2f8fc4b362abd7b385d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 05:28:55 np0005603787 systemd[1]: var-lib-containers-storage-overlay-8165a301387d9888f38813d3b325cb3e8b35de38cf3b9243dafcddc4856b2376-merged.mount: Deactivated successfully.
Jan 31 05:28:55 np0005603787 podman[249471]: 2026-01-31 10:28:55.823459161 +0000 UTC m=+0.229842671 container remove 73648ecacfbcf6887ce4c0c9383296bc5130c3c110f7e2f8fc4b362abd7b385d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_blackwell, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:28:55 np0005603787 systemd[1]: libpod-conmon-73648ecacfbcf6887ce4c0c9383296bc5130c3c110f7e2f8fc4b362abd7b385d.scope: Deactivated successfully.
Jan 31 05:28:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:56 np0005603787 podman[249512]: 2026-01-31 10:28:56.015395126 +0000 UTC m=+0.063553169 container create 4d20e694f23490e92c5794f81c93e9e956ae56ec7e70845e3175bd7d0bd414d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 05:28:56 np0005603787 systemd[1]: Started libpod-conmon-4d20e694f23490e92c5794f81c93e9e956ae56ec7e70845e3175bd7d0bd414d3.scope.
Jan 31 05:28:56 np0005603787 podman[249512]: 2026-01-31 10:28:55.98964674 +0000 UTC m=+0.037804883 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:28:56 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:28:56 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b79252243f02e4d5774abdd88418de9fe019f866085cba11300924a0bf005d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:28:56 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b79252243f02e4d5774abdd88418de9fe019f866085cba11300924a0bf005d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:28:56 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b79252243f02e4d5774abdd88418de9fe019f866085cba11300924a0bf005d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:28:56 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b79252243f02e4d5774abdd88418de9fe019f866085cba11300924a0bf005d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:28:56 np0005603787 podman[249512]: 2026-01-31 10:28:56.115336105 +0000 UTC m=+0.163494198 container init 4d20e694f23490e92c5794f81c93e9e956ae56ec7e70845e3175bd7d0bd414d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_poitras, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:28:56 np0005603787 podman[249512]: 2026-01-31 10:28:56.129019515 +0000 UTC m=+0.177177568 container start 4d20e694f23490e92c5794f81c93e9e956ae56ec7e70845e3175bd7d0bd414d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_poitras, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 05:28:56 np0005603787 podman[249512]: 2026-01-31 10:28:56.135323336 +0000 UTC m=+0.183481479 container attach 4d20e694f23490e92c5794f81c93e9e956ae56ec7e70845e3175bd7d0bd414d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]: [
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:    {
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:        "available": false,
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:        "being_replaced": false,
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:        "ceph_device_lvm": false,
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:        "lsm_data": {},
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:        "lvs": [],
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:        "path": "/dev/sr0",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:        "rejected_reasons": [
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "Insufficient space (<5GB)",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "Has a FileSystem"
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:        ],
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:        "sys_api": {
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "actuators": null,
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "device_nodes": [
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:                "sr0"
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            ],
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "devname": "sr0",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "human_readable_size": "482.00 KB",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "id_bus": "ata",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "model": "QEMU DVD-ROM",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "nr_requests": "2",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "parent": "/dev/sr0",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "partitions": {},
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "path": "/dev/sr0",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "removable": "1",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "rev": "2.5+",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "ro": "0",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "rotational": "1",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "sas_address": "",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "sas_device_handle": "",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "scheduler_mode": "mq-deadline",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "sectors": 0,
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "sectorsize": "2048",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "size": 493568.0,
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "support_discard": "2048",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "type": "disk",
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:            "vendor": "QEMU"
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:        }
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]:    }
Jan 31 05:28:56 np0005603787 gifted_poitras[249528]: ]
Jan 31 05:28:56 np0005603787 systemd[1]: libpod-4d20e694f23490e92c5794f81c93e9e956ae56ec7e70845e3175bd7d0bd414d3.scope: Deactivated successfully.
Jan 31 05:28:56 np0005603787 podman[249512]: 2026-01-31 10:28:56.704094722 +0000 UTC m=+0.752252775 container died 4d20e694f23490e92c5794f81c93e9e956ae56ec7e70845e3175bd7d0bd414d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:28:56 np0005603787 systemd[1]: var-lib-containers-storage-overlay-51b79252243f02e4d5774abdd88418de9fe019f866085cba11300924a0bf005d-merged.mount: Deactivated successfully.
Jan 31 05:28:56 np0005603787 podman[249512]: 2026-01-31 10:28:56.756457346 +0000 UTC m=+0.804615419 container remove 4d20e694f23490e92c5794f81c93e9e956ae56ec7e70845e3175bd7d0bd414d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_poitras, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 05:28:56 np0005603787 systemd[1]: libpod-conmon-4d20e694f23490e92c5794f81c93e9e956ae56ec7e70845e3175bd7d0bd414d3.scope: Deactivated successfully.
Jan 31 05:28:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:28:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:28:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:28:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:28:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:28:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:28:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:28:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:28:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:28:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:28:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:28:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:28:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:28:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:28:56 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:28:56 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:28:57 np0005603787 podman[250458]: 2026-01-31 10:28:57.325685235 +0000 UTC m=+0.062324625 container create 18bbfbbded89cb0cf206fea8f84a405316f814ad65ed1f11fd6a109ef02ec465 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_diffie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 05:28:57 np0005603787 systemd[1]: Started libpod-conmon-18bbfbbded89cb0cf206fea8f84a405316f814ad65ed1f11fd6a109ef02ec465.scope.
Jan 31 05:28:57 np0005603787 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 05:28:57 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:28:57 np0005603787 podman[250458]: 2026-01-31 10:28:57.298442338 +0000 UTC m=+0.035081788 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:28:57 np0005603787 podman[250458]: 2026-01-31 10:28:57.401247836 +0000 UTC m=+0.137887196 container init 18bbfbbded89cb0cf206fea8f84a405316f814ad65ed1f11fd6a109ef02ec465 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 05:28:57 np0005603787 podman[250458]: 2026-01-31 10:28:57.410193968 +0000 UTC m=+0.146833318 container start 18bbfbbded89cb0cf206fea8f84a405316f814ad65ed1f11fd6a109ef02ec465 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 05:28:57 np0005603787 podman[250458]: 2026-01-31 10:28:57.413420094 +0000 UTC m=+0.150059444 container attach 18bbfbbded89cb0cf206fea8f84a405316f814ad65ed1f11fd6a109ef02ec465 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:28:57 np0005603787 zen_diffie[250475]: 167 167
Jan 31 05:28:57 np0005603787 systemd[1]: libpod-18bbfbbded89cb0cf206fea8f84a405316f814ad65ed1f11fd6a109ef02ec465.scope: Deactivated successfully.
Jan 31 05:28:57 np0005603787 podman[250458]: 2026-01-31 10:28:57.416612071 +0000 UTC m=+0.153251461 container died 18bbfbbded89cb0cf206fea8f84a405316f814ad65ed1f11fd6a109ef02ec465 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_diffie, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 05:28:57 np0005603787 systemd[1]: var-lib-containers-storage-overlay-84a930a0fb2cd845f6367b24b8d806819f33937c063425bd4931c7577c8047a6-merged.mount: Deactivated successfully.
Jan 31 05:28:57 np0005603787 podman[250458]: 2026-01-31 10:28:57.453447566 +0000 UTC m=+0.190086916 container remove 18bbfbbded89cb0cf206fea8f84a405316f814ad65ed1f11fd6a109ef02ec465 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_diffie, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 05:28:57 np0005603787 systemd[1]: libpod-conmon-18bbfbbded89cb0cf206fea8f84a405316f814ad65ed1f11fd6a109ef02ec465.scope: Deactivated successfully.
Jan 31 05:28:57 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:28:57 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:28:57 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:28:57 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:28:57 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:28:57 np0005603787 podman[250499]: 2026-01-31 10:28:57.651909068 +0000 UTC m=+0.064229636 container create c326c224bd9cdc61c382c694aeebd0753dd4eefd9a5beef435367708036ce1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:28:57 np0005603787 systemd[1]: Started libpod-conmon-c326c224bd9cdc61c382c694aeebd0753dd4eefd9a5beef435367708036ce1a7.scope.
Jan 31 05:28:57 np0005603787 podman[250499]: 2026-01-31 10:28:57.626263775 +0000 UTC m=+0.038584393 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:28:57 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:28:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1e533709c51c6f35443dc834475d470dceeb85f1e976183d69c38144628a7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:28:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1e533709c51c6f35443dc834475d470dceeb85f1e976183d69c38144628a7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:28:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1e533709c51c6f35443dc834475d470dceeb85f1e976183d69c38144628a7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:28:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1e533709c51c6f35443dc834475d470dceeb85f1e976183d69c38144628a7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:28:57 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1e533709c51c6f35443dc834475d470dceeb85f1e976183d69c38144628a7c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:28:57 np0005603787 podman[250499]: 2026-01-31 10:28:57.743216965 +0000 UTC m=+0.155537513 container init c326c224bd9cdc61c382c694aeebd0753dd4eefd9a5beef435367708036ce1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pike, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:28:57 np0005603787 podman[250499]: 2026-01-31 10:28:57.750784129 +0000 UTC m=+0.163104667 container start c326c224bd9cdc61c382c694aeebd0753dd4eefd9a5beef435367708036ce1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:28:57 np0005603787 podman[250499]: 2026-01-31 10:28:57.754271983 +0000 UTC m=+0.166592551 container attach c326c224bd9cdc61c382c694aeebd0753dd4eefd9a5beef435367708036ce1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 05:28:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:28:58 np0005603787 amazing_pike[250515]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:28:58 np0005603787 amazing_pike[250515]: --> All data devices are unavailable
Jan 31 05:28:58 np0005603787 systemd[1]: libpod-c326c224bd9cdc61c382c694aeebd0753dd4eefd9a5beef435367708036ce1a7.scope: Deactivated successfully.
Jan 31 05:28:58 np0005603787 podman[250499]: 2026-01-31 10:28:58.209210184 +0000 UTC m=+0.621530762 container died c326c224bd9cdc61c382c694aeebd0753dd4eefd9a5beef435367708036ce1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:28:58 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ff1e533709c51c6f35443dc834475d470dceeb85f1e976183d69c38144628a7c-merged.mount: Deactivated successfully.
Jan 31 05:28:58 np0005603787 podman[250499]: 2026-01-31 10:28:58.262643138 +0000 UTC m=+0.674963716 container remove c326c224bd9cdc61c382c694aeebd0753dd4eefd9a5beef435367708036ce1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 05:28:58 np0005603787 systemd[1]: libpod-conmon-c326c224bd9cdc61c382c694aeebd0753dd4eefd9a5beef435367708036ce1a7.scope: Deactivated successfully.
Jan 31 05:28:58 np0005603787 podman[250610]: 2026-01-31 10:28:58.678710828 +0000 UTC m=+0.046266661 container create c8c2c6c9e64f27e8482bbd1a83efb0e553841d88afe97d91c3168feb6ec16573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goldberg, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 05:28:58 np0005603787 systemd[1]: Started libpod-conmon-c8c2c6c9e64f27e8482bbd1a83efb0e553841d88afe97d91c3168feb6ec16573.scope.
Jan 31 05:28:58 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:28:58 np0005603787 podman[250610]: 2026-01-31 10:28:58.660233659 +0000 UTC m=+0.027789572 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:28:58 np0005603787 podman[250610]: 2026-01-31 10:28:58.761608147 +0000 UTC m=+0.129164000 container init c8c2c6c9e64f27e8482bbd1a83efb0e553841d88afe97d91c3168feb6ec16573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goldberg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:28:58 np0005603787 podman[250610]: 2026-01-31 10:28:58.769354267 +0000 UTC m=+0.136910090 container start c8c2c6c9e64f27e8482bbd1a83efb0e553841d88afe97d91c3168feb6ec16573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goldberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 05:28:58 np0005603787 podman[250610]: 2026-01-31 10:28:58.772768209 +0000 UTC m=+0.140324032 container attach c8c2c6c9e64f27e8482bbd1a83efb0e553841d88afe97d91c3168feb6ec16573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goldberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:28:58 np0005603787 confident_goldberg[250625]: 167 167
Jan 31 05:28:58 np0005603787 systemd[1]: libpod-c8c2c6c9e64f27e8482bbd1a83efb0e553841d88afe97d91c3168feb6ec16573.scope: Deactivated successfully.
Jan 31 05:28:58 np0005603787 podman[250610]: 2026-01-31 10:28:58.774731192 +0000 UTC m=+0.142287025 container died c8c2c6c9e64f27e8482bbd1a83efb0e553841d88afe97d91c3168feb6ec16573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 05:28:58 np0005603787 systemd[1]: var-lib-containers-storage-overlay-85b0978aa61ee08f5a3c04bbb4b353680bbd3cf225193b1fd0e93c46bba8573e-merged.mount: Deactivated successfully.
Jan 31 05:28:58 np0005603787 podman[250610]: 2026-01-31 10:28:58.808241198 +0000 UTC m=+0.175797021 container remove c8c2c6c9e64f27e8482bbd1a83efb0e553841d88afe97d91c3168feb6ec16573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 05:28:58 np0005603787 systemd[1]: libpod-conmon-c8c2c6c9e64f27e8482bbd1a83efb0e553841d88afe97d91c3168feb6ec16573.scope: Deactivated successfully.
Jan 31 05:28:58 np0005603787 podman[250650]: 2026-01-31 10:28:58.974699844 +0000 UTC m=+0.054392650 container create 2120c6ed655f16ddfdd9e5ab63d42f7b04386b9046b9cd05d51b970b0e73dc8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 05:28:59 np0005603787 systemd[1]: Started libpod-conmon-2120c6ed655f16ddfdd9e5ab63d42f7b04386b9046b9cd05d51b970b0e73dc8f.scope.
Jan 31 05:28:59 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:28:59 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017306cefafb1f580ee5ca05da379ceab90b70305c6f30050e1b6c7efa02cd26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:28:59 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017306cefafb1f580ee5ca05da379ceab90b70305c6f30050e1b6c7efa02cd26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:28:59 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017306cefafb1f580ee5ca05da379ceab90b70305c6f30050e1b6c7efa02cd26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:28:59 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017306cefafb1f580ee5ca05da379ceab90b70305c6f30050e1b6c7efa02cd26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:28:59 np0005603787 podman[250650]: 2026-01-31 10:28:59.034926731 +0000 UTC m=+0.114619547 container init 2120c6ed655f16ddfdd9e5ab63d42f7b04386b9046b9cd05d51b970b0e73dc8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_brahmagupta, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:28:59 np0005603787 podman[250650]: 2026-01-31 10:28:59.040299196 +0000 UTC m=+0.119991992 container start 2120c6ed655f16ddfdd9e5ab63d42f7b04386b9046b9cd05d51b970b0e73dc8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_brahmagupta, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 05:28:59 np0005603787 podman[250650]: 2026-01-31 10:28:59.043722569 +0000 UTC m=+0.123415375 container attach 2120c6ed655f16ddfdd9e5ab63d42f7b04386b9046b9cd05d51b970b0e73dc8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:28:59 np0005603787 podman[250650]: 2026-01-31 10:28:58.951422006 +0000 UTC m=+0.031114862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]: {
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:    "0": [
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:        {
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "devices": [
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "/dev/loop3"
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            ],
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "lv_name": "ceph_lv0",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "lv_size": "21470642176",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "name": "ceph_lv0",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "tags": {
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.cluster_name": "ceph",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.crush_device_class": "",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.encrypted": "0",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.objectstore": "bluestore",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.osd_id": "0",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.type": "block",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.vdo": "0",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.with_tpm": "0"
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            },
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "type": "block",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "vg_name": "ceph_vg0"
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:        }
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:    ],
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:    "1": [
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:        {
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "devices": [
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "/dev/loop4"
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            ],
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "lv_name": "ceph_lv1",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "lv_size": "21470642176",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "name": "ceph_lv1",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "tags": {
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.cluster_name": "ceph",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.crush_device_class": "",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.encrypted": "0",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.objectstore": "bluestore",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.osd_id": "1",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.type": "block",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.vdo": "0",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.with_tpm": "0"
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            },
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "type": "block",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "vg_name": "ceph_vg1"
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:        }
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:    ],
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:    "2": [
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:        {
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "devices": [
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "/dev/loop5"
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            ],
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "lv_name": "ceph_lv2",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "lv_size": "21470642176",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "name": "ceph_lv2",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "tags": {
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.cluster_name": "ceph",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.crush_device_class": "",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.encrypted": "0",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.objectstore": "bluestore",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.osd_id": "2",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.type": "block",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.vdo": "0",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:                "ceph.with_tpm": "0"
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            },
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "type": "block",
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:            "vg_name": "ceph_vg2"
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:        }
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]:    ]
Jan 31 05:28:59 np0005603787 amazing_brahmagupta[250667]: }
Jan 31 05:28:59 np0005603787 systemd[1]: libpod-2120c6ed655f16ddfdd9e5ab63d42f7b04386b9046b9cd05d51b970b0e73dc8f.scope: Deactivated successfully.
Jan 31 05:28:59 np0005603787 podman[250650]: 2026-01-31 10:28:59.307751742 +0000 UTC m=+0.387444588 container died 2120c6ed655f16ddfdd9e5ab63d42f7b04386b9046b9cd05d51b970b0e73dc8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_brahmagupta, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 05:28:59 np0005603787 systemd[1]: var-lib-containers-storage-overlay-017306cefafb1f580ee5ca05da379ceab90b70305c6f30050e1b6c7efa02cd26-merged.mount: Deactivated successfully.
Jan 31 05:28:59 np0005603787 podman[250650]: 2026-01-31 10:28:59.353302503 +0000 UTC m=+0.432995349 container remove 2120c6ed655f16ddfdd9e5ab63d42f7b04386b9046b9cd05d51b970b0e73dc8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 05:28:59 np0005603787 systemd[1]: libpod-conmon-2120c6ed655f16ddfdd9e5ab63d42f7b04386b9046b9cd05d51b970b0e73dc8f.scope: Deactivated successfully.
Jan 31 05:28:59 np0005603787 podman[250748]: 2026-01-31 10:28:59.827486034 +0000 UTC m=+0.062710976 container create 48c11b1751a466220db4eb86c713a6a459f8962c9c8101470a4f85a8d4ea286c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:28:59 np0005603787 systemd[1]: Started libpod-conmon-48c11b1751a466220db4eb86c713a6a459f8962c9c8101470a4f85a8d4ea286c.scope.
Jan 31 05:28:59 np0005603787 podman[250748]: 2026-01-31 10:28:59.803239648 +0000 UTC m=+0.038464640 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:28:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:28:59 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:28:59 np0005603787 podman[250748]: 2026-01-31 10:28:59.926099848 +0000 UTC m=+0.161324800 container init 48c11b1751a466220db4eb86c713a6a459f8962c9c8101470a4f85a8d4ea286c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:28:59 np0005603787 podman[250748]: 2026-01-31 10:28:59.935047589 +0000 UTC m=+0.170272531 container start 48c11b1751a466220db4eb86c713a6a459f8962c9c8101470a4f85a8d4ea286c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:28:59 np0005603787 podman[250748]: 2026-01-31 10:28:59.93914275 +0000 UTC m=+0.174367692 container attach 48c11b1751a466220db4eb86c713a6a459f8962c9c8101470a4f85a8d4ea286c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_noyce, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:28:59 np0005603787 jolly_noyce[250764]: 167 167
Jan 31 05:28:59 np0005603787 systemd[1]: libpod-48c11b1751a466220db4eb86c713a6a459f8962c9c8101470a4f85a8d4ea286c.scope: Deactivated successfully.
Jan 31 05:28:59 np0005603787 podman[250748]: 2026-01-31 10:28:59.940919468 +0000 UTC m=+0.176144400 container died 48c11b1751a466220db4eb86c713a6a459f8962c9c8101470a4f85a8d4ea286c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 05:28:59 np0005603787 systemd[1]: var-lib-containers-storage-overlay-58d17c446e88cedbfcb2f6b4286111ebab8fd54a3f3493ab87d1743f43a9bdd6-merged.mount: Deactivated successfully.
Jan 31 05:28:59 np0005603787 podman[250748]: 2026-01-31 10:28:59.982556872 +0000 UTC m=+0.217781794 container remove 48c11b1751a466220db4eb86c713a6a459f8962c9c8101470a4f85a8d4ea286c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:28:59 np0005603787 systemd[1]: libpod-conmon-48c11b1751a466220db4eb86c713a6a459f8962c9c8101470a4f85a8d4ea286c.scope: Deactivated successfully.
Jan 31 05:29:00 np0005603787 podman[250787]: 2026-01-31 10:29:00.129142133 +0000 UTC m=+0.049310883 container create 23a41ca7cfb3c58e70c3199dd39950036591454fcdf89dc36ae6115170af795f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_noether, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 05:29:00 np0005603787 systemd[1]: Started libpod-conmon-23a41ca7cfb3c58e70c3199dd39950036591454fcdf89dc36ae6115170af795f.scope.
Jan 31 05:29:00 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:29:00 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d72a55779428f369c9442ae5456262ae044b2ef56ff2c803a939809a0b65c9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:29:00 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d72a55779428f369c9442ae5456262ae044b2ef56ff2c803a939809a0b65c9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:29:00 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d72a55779428f369c9442ae5456262ae044b2ef56ff2c803a939809a0b65c9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:29:00 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d72a55779428f369c9442ae5456262ae044b2ef56ff2c803a939809a0b65c9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:29:00 np0005603787 podman[250787]: 2026-01-31 10:29:00.110793107 +0000 UTC m=+0.030961907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:29:00 np0005603787 podman[250787]: 2026-01-31 10:29:00.220977914 +0000 UTC m=+0.141146674 container init 23a41ca7cfb3c58e70c3199dd39950036591454fcdf89dc36ae6115170af795f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_noether, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 05:29:00 np0005603787 podman[250787]: 2026-01-31 10:29:00.229007631 +0000 UTC m=+0.149176391 container start 23a41ca7cfb3c58e70c3199dd39950036591454fcdf89dc36ae6115170af795f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 05:29:00 np0005603787 podman[250787]: 2026-01-31 10:29:00.232055593 +0000 UTC m=+0.152224363 container attach 23a41ca7cfb3c58e70c3199dd39950036591454fcdf89dc36ae6115170af795f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_noether, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:29:00 np0005603787 lvm[250883]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:29:00 np0005603787 lvm[250882]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:29:00 np0005603787 lvm[250882]: VG ceph_vg0 finished
Jan 31 05:29:00 np0005603787 lvm[250883]: VG ceph_vg1 finished
Jan 31 05:29:00 np0005603787 lvm[250885]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:29:00 np0005603787 lvm[250885]: VG ceph_vg2 finished
Jan 31 05:29:00 np0005603787 confident_noether[250803]: {}
Jan 31 05:29:00 np0005603787 systemd[1]: libpod-23a41ca7cfb3c58e70c3199dd39950036591454fcdf89dc36ae6115170af795f.scope: Deactivated successfully.
Jan 31 05:29:00 np0005603787 systemd[1]: libpod-23a41ca7cfb3c58e70c3199dd39950036591454fcdf89dc36ae6115170af795f.scope: Consumed 1.078s CPU time.
Jan 31 05:29:00 np0005603787 podman[250787]: 2026-01-31 10:29:00.961845209 +0000 UTC m=+0.882013989 container died 23a41ca7cfb3c58e70c3199dd39950036591454fcdf89dc36ae6115170af795f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_noether, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:29:00 np0005603787 systemd[1]: var-lib-containers-storage-overlay-6d72a55779428f369c9442ae5456262ae044b2ef56ff2c803a939809a0b65c9d-merged.mount: Deactivated successfully.
Jan 31 05:29:01 np0005603787 podman[250787]: 2026-01-31 10:29:01.011726766 +0000 UTC m=+0.931895556 container remove 23a41ca7cfb3c58e70c3199dd39950036591454fcdf89dc36ae6115170af795f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_noether, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:29:01 np0005603787 systemd[1]: libpod-conmon-23a41ca7cfb3c58e70c3199dd39950036591454fcdf89dc36ae6115170af795f.scope: Deactivated successfully.
Jan 31 05:29:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:29:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:29:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:29:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:29:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:29:01 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:29:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:29:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:06 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:07 np0005603787 nova_compute[238603]: 2026-01-31 10:29:07.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:29:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:29:08 np0005603787 nova_compute[238603]: 2026-01-31 10:29:08.116 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:29:08 np0005603787 nova_compute[238603]: 2026-01-31 10:29:08.116 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:29:08 np0005603787 nova_compute[238603]: 2026-01-31 10:29:08.117 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 05:29:08 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:09 np0005603787 nova_compute[238603]: 2026-01-31 10:29:09.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:29:09 np0005603787 nova_compute[238603]: 2026-01-31 10:29:09.104 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:29:09 np0005603787 nova_compute[238603]: 2026-01-31 10:29:09.104 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:29:09 np0005603787 nova_compute[238603]: 2026-01-31 10:29:09.104 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 05:29:09 np0005603787 nova_compute[238603]: 2026-01-31 10:29:09.123 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 05:29:09 np0005603787 podman[250928]: 2026-01-31 10:29:09.870005122 +0000 UTC m=+0.072252953 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 05:29:09 np0005603787 podman[250927]: 2026-01-31 10:29:09.904157784 +0000 UTC m=+0.106140657 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 05:29:10 np0005603787 nova_compute[238603]: 2026-01-31 10:29:10.122 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:29:10 np0005603787 nova_compute[238603]: 2026-01-31 10:29:10.123 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:29:10 np0005603787 nova_compute[238603]: 2026-01-31 10:29:10.123 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:29:10 np0005603787 nova_compute[238603]: 2026-01-31 10:29:10.152 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:29:10 np0005603787 nova_compute[238603]: 2026-01-31 10:29:10.153 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:29:10 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:11 np0005603787 nova_compute[238603]: 2026-01-31 10:29:11.128 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:29:12 np0005603787 nova_compute[238603]: 2026-01-31 10:29:12.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:29:12 np0005603787 nova_compute[238603]: 2026-01-31 10:29:12.141 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:29:12 np0005603787 nova_compute[238603]: 2026-01-31 10:29:12.142 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:29:12 np0005603787 nova_compute[238603]: 2026-01-31 10:29:12.142 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:29:12 np0005603787 nova_compute[238603]: 2026-01-31 10:29:12.143 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:29:12 np0005603787 nova_compute[238603]: 2026-01-31 10:29:12.143 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:29:12 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:29:12 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162199554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:29:12 np0005603787 nova_compute[238603]: 2026-01-31 10:29:12.626 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:29:12 np0005603787 nova_compute[238603]: 2026-01-31 10:29:12.782 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:29:12 np0005603787 nova_compute[238603]: 2026-01-31 10:29:12.783 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5080MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:29:12 np0005603787 nova_compute[238603]: 2026-01-31 10:29:12.783 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:29:12 np0005603787 nova_compute[238603]: 2026-01-31 10:29:12.783 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:29:12 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:13 np0005603787 nova_compute[238603]: 2026-01-31 10:29:13.034 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:29:13 np0005603787 nova_compute[238603]: 2026-01-31 10:29:13.035 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:29:13 np0005603787 nova_compute[238603]: 2026-01-31 10:29:13.091 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Refreshing inventories for resource provider 207962d2-1ba9-4db2-8533-2a30e7131f3e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 05:29:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:29:13 np0005603787 nova_compute[238603]: 2026-01-31 10:29:13.150 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Updating ProviderTree inventory for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 05:29:13 np0005603787 nova_compute[238603]: 2026-01-31 10:29:13.151 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Updating inventory in ProviderTree for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 05:29:13 np0005603787 nova_compute[238603]: 2026-01-31 10:29:13.171 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Refreshing aggregate associations for resource provider 207962d2-1ba9-4db2-8533-2a30e7131f3e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 05:29:13 np0005603787 nova_compute[238603]: 2026-01-31 10:29:13.196 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Refreshing trait associations for resource provider 207962d2-1ba9-4db2-8533-2a30e7131f3e, traits: COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SVM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE41,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 05:29:13 np0005603787 nova_compute[238603]: 2026-01-31 10:29:13.225 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:29:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:29:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1215957779' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:29:13 np0005603787 nova_compute[238603]: 2026-01-31 10:29:13.724 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:29:13 np0005603787 nova_compute[238603]: 2026-01-31 10:29:13.729 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:29:13 np0005603787 nova_compute[238603]: 2026-01-31 10:29:13.748 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:29:13 np0005603787 nova_compute[238603]: 2026-01-31 10:29:13.750 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:29:13 np0005603787 nova_compute[238603]: 2026-01-31 10:29:13.751 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:29:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:29:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:29:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:29:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:29:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:29:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:29:14 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:16 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:17 np0005603787 nova_compute[238603]: 2026-01-31 10:29:17.751 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:29:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:29:18 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:20 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:21 np0005603787 nova_compute[238603]: 2026-01-31 10:29:21.104 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:29:21 np0005603787 nova_compute[238603]: 2026-01-31 10:29:21.104 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 05:29:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:29:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3955212313' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:29:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:29:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3955212313' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:29:22 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:29:24 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:26 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:29:28 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 05:29:30 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 30 op/s
Jan 31 05:29:32 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 56 op/s
Jan 31 05:29:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:29:34 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 05:29:36 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 05:29:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:29:37.071 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:29:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:29:37.072 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:29:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:29:37.072 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:29:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:29:38 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 05:29:40 np0005603787 podman[251015]: 2026-01-31 10:29:40.862322861 +0000 UTC m=+0.078770758 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 05:29:40 np0005603787 podman[251014]: 2026-01-31 10:29:40.929988998 +0000 UTC m=+0.146854896 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 05:29:40 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 69 op/s
Jan 31 05:29:42 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Jan 31 05:29:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:29:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:29:43
Jan 31 05:29:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:29:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:29:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', 'volumes', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr']
Jan 31 05:29:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:29:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:29:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:29:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:29:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:29:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:29:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:29:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:29:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:29:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:29:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:29:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:29:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:29:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:29:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:29:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:29:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:29:44 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 31 05:29:46 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:29:48 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:50 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:52 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 2.450943614167069e-07 of space, bias 1.0, pg target 7.352830842501207e-05 quantized to 32 (current 32)
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.527403468629877e-06 of space, bias 4.0, pg target 0.0018328841623558524 quantized to 16 (current 16)
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:29:54 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:56 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:29:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:29:58 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:00 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:30:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:30:01 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:30:01 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:30:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:30:02 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:30:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:30:02 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:30:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:30:02 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:30:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:30:02 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:30:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:30:02 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:30:02 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:30:02 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:30:02 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:30:02 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:30:02 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:30:02 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:30:02 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:30:02 np0005603787 podman[251273]: 2026-01-31 10:30:02.702861945 +0000 UTC m=+0.084891595 container create 826bea73e9e2809e7c471af0c3ce20a835fc90aeaa4be82d75222722c97207f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_haslett, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:30:02 np0005603787 systemd[1]: Started libpod-conmon-826bea73e9e2809e7c471af0c3ce20a835fc90aeaa4be82d75222722c97207f6.scope.
Jan 31 05:30:02 np0005603787 podman[251273]: 2026-01-31 10:30:02.664831564 +0000 UTC m=+0.046861284 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:30:02 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:30:02 np0005603787 podman[251273]: 2026-01-31 10:30:02.806010835 +0000 UTC m=+0.188040485 container init 826bea73e9e2809e7c471af0c3ce20a835fc90aeaa4be82d75222722c97207f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_haslett, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 05:30:02 np0005603787 podman[251273]: 2026-01-31 10:30:02.814363121 +0000 UTC m=+0.196392751 container start 826bea73e9e2809e7c471af0c3ce20a835fc90aeaa4be82d75222722c97207f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 05:30:02 np0005603787 sharp_haslett[251289]: 167 167
Jan 31 05:30:02 np0005603787 podman[251273]: 2026-01-31 10:30:02.821287289 +0000 UTC m=+0.203316919 container attach 826bea73e9e2809e7c471af0c3ce20a835fc90aeaa4be82d75222722c97207f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_haslett, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 05:30:02 np0005603787 systemd[1]: libpod-826bea73e9e2809e7c471af0c3ce20a835fc90aeaa4be82d75222722c97207f6.scope: Deactivated successfully.
Jan 31 05:30:02 np0005603787 conmon[251289]: conmon 826bea73e9e2809e7c47 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-826bea73e9e2809e7c471af0c3ce20a835fc90aeaa4be82d75222722c97207f6.scope/container/memory.events
Jan 31 05:30:02 np0005603787 podman[251273]: 2026-01-31 10:30:02.823649653 +0000 UTC m=+0.205679323 container died 826bea73e9e2809e7c471af0c3ce20a835fc90aeaa4be82d75222722c97207f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_haslett, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 05:30:02 np0005603787 systemd[1]: var-lib-containers-storage-overlay-2c9f67a7d53c622872a5ce2971ab44ac5445f3dcb0a357587a4abf4f1da5cd75-merged.mount: Deactivated successfully.
Jan 31 05:30:02 np0005603787 podman[251273]: 2026-01-31 10:30:02.891229177 +0000 UTC m=+0.273258807 container remove 826bea73e9e2809e7c471af0c3ce20a835fc90aeaa4be82d75222722c97207f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_haslett, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 05:30:02 np0005603787 systemd[1]: libpod-conmon-826bea73e9e2809e7c471af0c3ce20a835fc90aeaa4be82d75222722c97207f6.scope: Deactivated successfully.
Jan 31 05:30:02 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:03 np0005603787 podman[251313]: 2026-01-31 10:30:03.008973612 +0000 UTC m=+0.039637856 container create b7da44ec2208db59b3a76907ee0a147b1ce57905fd488c84721a3718fff3d823 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 05:30:03 np0005603787 systemd[1]: Started libpod-conmon-b7da44ec2208db59b3a76907ee0a147b1ce57905fd488c84721a3718fff3d823.scope.
Jan 31 05:30:03 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:30:03 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55205536bed27c0438fb5fe3097c64fc4ea531deb5fe6636b37abf6f8238383f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:30:03 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55205536bed27c0438fb5fe3097c64fc4ea531deb5fe6636b37abf6f8238383f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:30:03 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55205536bed27c0438fb5fe3097c64fc4ea531deb5fe6636b37abf6f8238383f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:30:03 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55205536bed27c0438fb5fe3097c64fc4ea531deb5fe6636b37abf6f8238383f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:30:03 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55205536bed27c0438fb5fe3097c64fc4ea531deb5fe6636b37abf6f8238383f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:30:03 np0005603787 podman[251313]: 2026-01-31 10:30:02.992940628 +0000 UTC m=+0.023604902 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:30:03 np0005603787 podman[251313]: 2026-01-31 10:30:03.097692081 +0000 UTC m=+0.128356385 container init b7da44ec2208db59b3a76907ee0a147b1ce57905fd488c84721a3718fff3d823 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_saha, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:30:03 np0005603787 podman[251313]: 2026-01-31 10:30:03.103093427 +0000 UTC m=+0.133757681 container start b7da44ec2208db59b3a76907ee0a147b1ce57905fd488c84721a3718fff3d823 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 05:30:03 np0005603787 podman[251313]: 2026-01-31 10:30:03.106711945 +0000 UTC m=+0.137376199 container attach b7da44ec2208db59b3a76907ee0a147b1ce57905fd488c84721a3718fff3d823 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_saha, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:30:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:30:03 np0005603787 bold_saha[251329]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:30:03 np0005603787 bold_saha[251329]: --> All data devices are unavailable
Jan 31 05:30:03 np0005603787 systemd[1]: libpod-b7da44ec2208db59b3a76907ee0a147b1ce57905fd488c84721a3718fff3d823.scope: Deactivated successfully.
Jan 31 05:30:03 np0005603787 podman[251313]: 2026-01-31 10:30:03.564449068 +0000 UTC m=+0.595113352 container died b7da44ec2208db59b3a76907ee0a147b1ce57905fd488c84721a3718fff3d823 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_saha, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:30:03 np0005603787 systemd[1]: var-lib-containers-storage-overlay-55205536bed27c0438fb5fe3097c64fc4ea531deb5fe6636b37abf6f8238383f-merged.mount: Deactivated successfully.
Jan 31 05:30:03 np0005603787 podman[251313]: 2026-01-31 10:30:03.660748751 +0000 UTC m=+0.691412995 container remove b7da44ec2208db59b3a76907ee0a147b1ce57905fd488c84721a3718fff3d823 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_saha, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:30:03 np0005603787 systemd[1]: libpod-conmon-b7da44ec2208db59b3a76907ee0a147b1ce57905fd488c84721a3718fff3d823.scope: Deactivated successfully.
Jan 31 05:30:04 np0005603787 podman[251424]: 2026-01-31 10:30:04.140238614 +0000 UTC m=+0.047860220 container create 383e955d9f8ee962e3c66b058daedab18e386e49bc3aeb8ddda532033fb38e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kalam, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 31 05:30:04 np0005603787 systemd[1]: Started libpod-conmon-383e955d9f8ee962e3c66b058daedab18e386e49bc3aeb8ddda532033fb38e12.scope.
Jan 31 05:30:04 np0005603787 podman[251424]: 2026-01-31 10:30:04.111466354 +0000 UTC m=+0.019087980 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:30:04 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:30:04 np0005603787 podman[251424]: 2026-01-31 10:30:04.254248009 +0000 UTC m=+0.161869675 container init 383e955d9f8ee962e3c66b058daedab18e386e49bc3aeb8ddda532033fb38e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 05:30:04 np0005603787 podman[251424]: 2026-01-31 10:30:04.261458754 +0000 UTC m=+0.169080360 container start 383e955d9f8ee962e3c66b058daedab18e386e49bc3aeb8ddda532033fb38e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 05:30:04 np0005603787 pedantic_kalam[251440]: 167 167
Jan 31 05:30:04 np0005603787 systemd[1]: libpod-383e955d9f8ee962e3c66b058daedab18e386e49bc3aeb8ddda532033fb38e12.scope: Deactivated successfully.
Jan 31 05:30:04 np0005603787 podman[251424]: 2026-01-31 10:30:04.293657418 +0000 UTC m=+0.201279034 container attach 383e955d9f8ee962e3c66b058daedab18e386e49bc3aeb8ddda532033fb38e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 05:30:04 np0005603787 podman[251424]: 2026-01-31 10:30:04.295165839 +0000 UTC m=+0.202787475 container died 383e955d9f8ee962e3c66b058daedab18e386e49bc3aeb8ddda532033fb38e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 05:30:04 np0005603787 systemd[1]: var-lib-containers-storage-overlay-5ecac4fbb0f09320cd7e1ad87612f8456455d86a2b97c75692e747065b07c89f-merged.mount: Deactivated successfully.
Jan 31 05:30:04 np0005603787 podman[251424]: 2026-01-31 10:30:04.51997424 +0000 UTC m=+0.427595856 container remove 383e955d9f8ee962e3c66b058daedab18e386e49bc3aeb8ddda532033fb38e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 31 05:30:04 np0005603787 systemd[1]: libpod-conmon-383e955d9f8ee962e3c66b058daedab18e386e49bc3aeb8ddda532033fb38e12.scope: Deactivated successfully.
Jan 31 05:30:04 np0005603787 podman[251466]: 2026-01-31 10:30:04.662831967 +0000 UTC m=+0.048502967 container create 3084640fbb8c3a95cf7a79d847080072896ad82712cccdf2ecf8a1e8c1834ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 31 05:30:04 np0005603787 systemd[1]: Started libpod-conmon-3084640fbb8c3a95cf7a79d847080072896ad82712cccdf2ecf8a1e8c1834ec3.scope.
Jan 31 05:30:04 np0005603787 podman[251466]: 2026-01-31 10:30:04.637964552 +0000 UTC m=+0.023635602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:30:04 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:30:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad1b45ed4626603d5870fba530291b68fabca04441faa06d68011d9914fb883/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:30:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad1b45ed4626603d5870fba530291b68fabca04441faa06d68011d9914fb883/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:30:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad1b45ed4626603d5870fba530291b68fabca04441faa06d68011d9914fb883/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:30:04 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad1b45ed4626603d5870fba530291b68fabca04441faa06d68011d9914fb883/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:30:04 np0005603787 podman[251466]: 2026-01-31 10:30:04.754742321 +0000 UTC m=+0.140413361 container init 3084640fbb8c3a95cf7a79d847080072896ad82712cccdf2ecf8a1e8c1834ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mayer, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:30:04 np0005603787 podman[251466]: 2026-01-31 10:30:04.760345073 +0000 UTC m=+0.146016073 container start 3084640fbb8c3a95cf7a79d847080072896ad82712cccdf2ecf8a1e8c1834ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mayer, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 05:30:04 np0005603787 podman[251466]: 2026-01-31 10:30:04.783329368 +0000 UTC m=+0.169000418 container attach 3084640fbb8c3a95cf7a79d847080072896ad82712cccdf2ecf8a1e8c1834ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 31 05:30:04 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]: {
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:    "0": [
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:        {
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "devices": [
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "/dev/loop3"
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            ],
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "lv_name": "ceph_lv0",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "lv_size": "21470642176",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "name": "ceph_lv0",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "tags": {
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.cluster_name": "ceph",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.crush_device_class": "",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.encrypted": "0",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.objectstore": "bluestore",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.osd_id": "0",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.type": "block",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.vdo": "0",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.with_tpm": "0"
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            },
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "type": "block",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "vg_name": "ceph_vg0"
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:        }
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:    ],
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:    "1": [
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:        {
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "devices": [
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "/dev/loop4"
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            ],
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "lv_name": "ceph_lv1",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "lv_size": "21470642176",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "name": "ceph_lv1",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "tags": {
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.cluster_name": "ceph",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.crush_device_class": "",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.encrypted": "0",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.objectstore": "bluestore",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.osd_id": "1",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.type": "block",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.vdo": "0",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.with_tpm": "0"
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            },
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "type": "block",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "vg_name": "ceph_vg1"
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:        }
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:    ],
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:    "2": [
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:        {
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "devices": [
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "/dev/loop5"
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            ],
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "lv_name": "ceph_lv2",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "lv_size": "21470642176",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "name": "ceph_lv2",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "tags": {
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.cluster_name": "ceph",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.crush_device_class": "",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.encrypted": "0",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.objectstore": "bluestore",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.osd_id": "2",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.type": "block",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.vdo": "0",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:                "ceph.with_tpm": "0"
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            },
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "type": "block",
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:            "vg_name": "ceph_vg2"
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:        }
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]:    ]
Jan 31 05:30:04 np0005603787 nifty_mayer[251483]: }
Jan 31 05:30:05 np0005603787 systemd[1]: libpod-3084640fbb8c3a95cf7a79d847080072896ad82712cccdf2ecf8a1e8c1834ec3.scope: Deactivated successfully.
Jan 31 05:30:05 np0005603787 podman[251466]: 2026-01-31 10:30:05.025453159 +0000 UTC m=+0.411124159 container died 3084640fbb8c3a95cf7a79d847080072896ad82712cccdf2ecf8a1e8c1834ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 05:30:05 np0005603787 systemd[1]: var-lib-containers-storage-overlay-1ad1b45ed4626603d5870fba530291b68fabca04441faa06d68011d9914fb883-merged.mount: Deactivated successfully.
Jan 31 05:30:05 np0005603787 podman[251466]: 2026-01-31 10:30:05.125302378 +0000 UTC m=+0.510973408 container remove 3084640fbb8c3a95cf7a79d847080072896ad82712cccdf2ecf8a1e8c1834ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mayer, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:30:05 np0005603787 systemd[1]: libpod-conmon-3084640fbb8c3a95cf7a79d847080072896ad82712cccdf2ecf8a1e8c1834ec3.scope: Deactivated successfully.
Jan 31 05:30:05 np0005603787 podman[251564]: 2026-01-31 10:30:05.622785769 +0000 UTC m=+0.061941481 container create 92904a8d2df19e7a48d2adebbf73ff2e9e1192beb5caf43b723b9294d90aa16a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:30:05 np0005603787 systemd[1]: Started libpod-conmon-92904a8d2df19e7a48d2adebbf73ff2e9e1192beb5caf43b723b9294d90aa16a.scope.
Jan 31 05:30:05 np0005603787 podman[251564]: 2026-01-31 10:30:05.592303213 +0000 UTC m=+0.031458985 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:30:05 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:30:05 np0005603787 podman[251564]: 2026-01-31 10:30:05.711227039 +0000 UTC m=+0.150382811 container init 92904a8d2df19e7a48d2adebbf73ff2e9e1192beb5caf43b723b9294d90aa16a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goodall, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:30:05 np0005603787 podman[251564]: 2026-01-31 10:30:05.719941667 +0000 UTC m=+0.159097379 container start 92904a8d2df19e7a48d2adebbf73ff2e9e1192beb5caf43b723b9294d90aa16a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:30:05 np0005603787 hungry_goodall[251581]: 167 167
Jan 31 05:30:05 np0005603787 systemd[1]: libpod-92904a8d2df19e7a48d2adebbf73ff2e9e1192beb5caf43b723b9294d90aa16a.scope: Deactivated successfully.
Jan 31 05:30:05 np0005603787 podman[251564]: 2026-01-31 10:30:05.726622397 +0000 UTC m=+0.165778159 container attach 92904a8d2df19e7a48d2adebbf73ff2e9e1192beb5caf43b723b9294d90aa16a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 05:30:05 np0005603787 podman[251564]: 2026-01-31 10:30:05.727858931 +0000 UTC m=+0.167014643 container died 92904a8d2df19e7a48d2adebbf73ff2e9e1192beb5caf43b723b9294d90aa16a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 05:30:05 np0005603787 systemd[1]: var-lib-containers-storage-overlay-7c7cda10ae9481b73b5e4cdb30f2e6d5d7be60c64da9d1fc553fc2401bd8d1e5-merged.mount: Deactivated successfully.
Jan 31 05:30:05 np0005603787 podman[251564]: 2026-01-31 10:30:05.967273348 +0000 UTC m=+0.406429030 container remove 92904a8d2df19e7a48d2adebbf73ff2e9e1192beb5caf43b723b9294d90aa16a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goodall, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Jan 31 05:30:05 np0005603787 systemd[1]: libpod-conmon-92904a8d2df19e7a48d2adebbf73ff2e9e1192beb5caf43b723b9294d90aa16a.scope: Deactivated successfully.
Jan 31 05:30:06 np0005603787 podman[251605]: 2026-01-31 10:30:06.143161652 +0000 UTC m=+0.056111044 container create f532853dc4a24e48b8b3ff2dfb4c4ead231221b236764680028c606ca75c4ed6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_swanson, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 05:30:06 np0005603787 systemd[1]: Started libpod-conmon-f532853dc4a24e48b8b3ff2dfb4c4ead231221b236764680028c606ca75c4ed6.scope.
Jan 31 05:30:06 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:30:06 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a99b6af787695a98f10398bd4b50f0ec91ff3286e7ccac1883840ec8bccf103/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:30:06 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a99b6af787695a98f10398bd4b50f0ec91ff3286e7ccac1883840ec8bccf103/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:30:06 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a99b6af787695a98f10398bd4b50f0ec91ff3286e7ccac1883840ec8bccf103/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:30:06 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a99b6af787695a98f10398bd4b50f0ec91ff3286e7ccac1883840ec8bccf103/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:30:06 np0005603787 podman[251605]: 2026-01-31 10:30:06.117721992 +0000 UTC m=+0.030671464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:30:06 np0005603787 podman[251605]: 2026-01-31 10:30:06.234700696 +0000 UTC m=+0.147650168 container init f532853dc4a24e48b8b3ff2dfb4c4ead231221b236764680028c606ca75c4ed6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 05:30:06 np0005603787 podman[251605]: 2026-01-31 10:30:06.250007522 +0000 UTC m=+0.162956944 container start f532853dc4a24e48b8b3ff2dfb4c4ead231221b236764680028c606ca75c4ed6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 05:30:06 np0005603787 podman[251605]: 2026-01-31 10:30:06.254747791 +0000 UTC m=+0.167697273 container attach f532853dc4a24e48b8b3ff2dfb4c4ead231221b236764680028c606ca75c4ed6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_swanson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:30:06 np0005603787 lvm[251700]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:30:06 np0005603787 lvm[251700]: VG ceph_vg1 finished
Jan 31 05:30:06 np0005603787 lvm[251699]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:30:06 np0005603787 lvm[251699]: VG ceph_vg0 finished
Jan 31 05:30:06 np0005603787 lvm[251702]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:30:06 np0005603787 lvm[251702]: VG ceph_vg2 finished
Jan 31 05:30:06 np0005603787 strange_swanson[251621]: {}
Jan 31 05:30:06 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:06 np0005603787 systemd[1]: libpod-f532853dc4a24e48b8b3ff2dfb4c4ead231221b236764680028c606ca75c4ed6.scope: Deactivated successfully.
Jan 31 05:30:06 np0005603787 systemd[1]: libpod-f532853dc4a24e48b8b3ff2dfb4c4ead231221b236764680028c606ca75c4ed6.scope: Consumed 1.022s CPU time.
Jan 31 05:30:06 np0005603787 podman[251605]: 2026-01-31 10:30:06.977475415 +0000 UTC m=+0.890424807 container died f532853dc4a24e48b8b3ff2dfb4c4ead231221b236764680028c606ca75c4ed6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 05:30:07 np0005603787 systemd[1]: var-lib-containers-storage-overlay-6a99b6af787695a98f10398bd4b50f0ec91ff3286e7ccac1883840ec8bccf103-merged.mount: Deactivated successfully.
Jan 31 05:30:07 np0005603787 podman[251605]: 2026-01-31 10:30:07.028289414 +0000 UTC m=+0.941238796 container remove f532853dc4a24e48b8b3ff2dfb4c4ead231221b236764680028c606ca75c4ed6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_swanson, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:30:07 np0005603787 systemd[1]: libpod-conmon-f532853dc4a24e48b8b3ff2dfb4c4ead231221b236764680028c606ca75c4ed6.scope: Deactivated successfully.
Jan 31 05:30:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:30:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:30:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:30:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:30:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:30:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:30:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:30:08 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:09 np0005603787 nova_compute[238603]: 2026-01-31 10:30:09.118 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:30:09 np0005603787 nova_compute[238603]: 2026-01-31 10:30:09.119 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:30:09 np0005603787 nova_compute[238603]: 2026-01-31 10:30:09.119 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 05:30:10 np0005603787 nova_compute[238603]: 2026-01-31 10:30:10.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:30:10 np0005603787 nova_compute[238603]: 2026-01-31 10:30:10.104 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:30:10 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:11 np0005603787 nova_compute[238603]: 2026-01-31 10:30:11.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:30:11 np0005603787 nova_compute[238603]: 2026-01-31 10:30:11.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 05:30:11 np0005603787 nova_compute[238603]: 2026-01-31 10:30:11.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 05:30:11 np0005603787 nova_compute[238603]: 2026-01-31 10:30:11.121 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 05:30:11 np0005603787 nova_compute[238603]: 2026-01-31 10:30:11.122 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:30:11 np0005603787 podman[251744]: 2026-01-31 10:30:11.849993062 +0000 UTC m=+0.065354785 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 05:30:11 np0005603787 podman[251743]: 2026-01-31 10:30:11.903175666 +0000 UTC m=+0.120613605 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 05:30:12 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:13 np0005603787 nova_compute[238603]: 2026-01-31 10:30:13.118 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:30:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:30:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:30:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:30:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:30:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:30:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:30:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:30:14 np0005603787 nova_compute[238603]: 2026-01-31 10:30:14.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:30:14 np0005603787 nova_compute[238603]: 2026-01-31 10:30:14.142 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 05:30:14 np0005603787 nova_compute[238603]: 2026-01-31 10:30:14.142 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 05:30:14 np0005603787 nova_compute[238603]: 2026-01-31 10:30:14.143 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 05:30:14 np0005603787 nova_compute[238603]: 2026-01-31 10:30:14.143 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 05:30:14 np0005603787 nova_compute[238603]: 2026-01-31 10:30:14.143 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 05:30:14 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:30:14 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3218407145' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:30:14 np0005603787 nova_compute[238603]: 2026-01-31 10:30:14.679 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 05:30:14 np0005603787 nova_compute[238603]: 2026-01-31 10:30:14.872 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 05:30:14 np0005603787 nova_compute[238603]: 2026-01-31 10:30:14.873 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5035MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 05:30:14 np0005603787 nova_compute[238603]: 2026-01-31 10:30:14.873 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 05:30:14 np0005603787 nova_compute[238603]: 2026-01-31 10:30:14.874 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 05:30:14 np0005603787 nova_compute[238603]: 2026-01-31 10:30:14.939 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 05:30:14 np0005603787 nova_compute[238603]: 2026-01-31 10:30:14.940 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 05:30:14 np0005603787 nova_compute[238603]: 2026-01-31 10:30:14.960 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 05:30:14 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:15 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:30:15 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322481377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:30:15 np0005603787 nova_compute[238603]: 2026-01-31 10:30:15.466 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
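The 0.506s CMD above is Nova's RBD disk-capacity probe. A minimal sketch of re-running the same probe outside Nova, assuming the client.openstack keyring is readable on this host and using the oslo_concurrency.processutils API that the log itself references (the ceph df JSON field name below is an assumption from recent Ceph releases, not taken from this log):

    # Sketch only: re-run the capacity probe logged at 10:30:14.960 with the same flags.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # 'total_avail_bytes' is the assumed free-space field in the ceph df JSON.
    print(stats['stats']['total_avail_bytes'])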
Jan 31 05:30:15 np0005603787 nova_compute[238603]: 2026-01-31 10:30:15.478 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:30:15 np0005603787 nova_compute[238603]: 2026-01-31 10:30:15.497 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:30:15 np0005603787 nova_compute[238603]: 2026-01-31 10:30:15.499 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:30:15 np0005603787 nova_compute[238603]: 2026-01-31 10:30:15.499 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
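The inventory reported at 10:30:15.497 is what bounds scheduling onto this node. A small worked sketch (not Nova code, just the standard Placement capacity rule capacity = (total - reserved) * allocation_ratio applied to the logged numbers):

    # Capacity implied by the logged inventory for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1

So with nothing allocated yet (used_vcpus=0, only the 512 MB host reservation counted), the node can accept up to 32 vCPUs worth of instances but only about 7 GiB of guest RAM and ~53 GB of disk.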
Jan 31 05:30:16 np0005603787 nova_compute[238603]: 2026-01-31 10:30:16.495 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:30:16 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:17 np0005603787 nova_compute[238603]: 2026-01-31 10:30:17.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:30:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:30:18 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:20 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:30:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2153083567' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:30:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:30:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2153083567' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
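The df / osd pool get-quota pair from 192.168.122.10 looks like the pattern a Cinder RBD backend uses when sizing the 'volumes' pool. A hedged way to replay the second query by hand, using the same flags as the audited command (the JSON field names below are assumptions, not taken from this log):

    # Sketch: replay the 'osd pool get-quota' query audited at 10:30:21.
    import json, subprocess

    out = subprocess.run(
        ['ceph', 'osd', 'pool', 'get-quota', 'volumes', '--format', 'json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    quota = json.loads(out)
    # Field names assumed; a value of 0 conventionally means "no quota set".
    print(quota.get('quota_max_bytes'), quota.get('quota_max_objects'))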
Jan 31 05:30:22 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:30:24 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:26 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:30:28.615013) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855428615050, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2058, "num_deletes": 251, "total_data_size": 3557679, "memory_usage": 3609376, "flush_reason": "Manual Compaction"}
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855428649880, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3457773, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21082, "largest_seqno": 23139, "table_properties": {"data_size": 3448455, "index_size": 5876, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18655, "raw_average_key_size": 19, "raw_value_size": 3429893, "raw_average_value_size": 3672, "num_data_blocks": 266, "num_entries": 934, "num_filter_entries": 934, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769855203, "oldest_key_time": 1769855203, "file_creation_time": 1769855428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 35033 microseconds, and 10009 cpu microseconds.
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:30:28.650043) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3457773 bytes OK
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:30:28.650146) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:30:28.652953) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:30:28.652980) EVENT_LOG_v1 {"time_micros": 1769855428652971, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:30:28.653006) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3549046, prev total WAL file size 3549046, number of live WAL files 2.
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:30:28.654242) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3376KB)], [50(7600KB)]
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855428654284, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11240745, "oldest_snapshot_seqno": -1}
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4828 keys, 9468447 bytes, temperature: kUnknown
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855428713535, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9468447, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9433484, "index_size": 21797, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12101, "raw_key_size": 118202, "raw_average_key_size": 24, "raw_value_size": 9343636, "raw_average_value_size": 1935, "num_data_blocks": 915, "num_entries": 4828, "num_filter_entries": 4828, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853439, "oldest_key_time": 0, "file_creation_time": 1769855428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:30:28.713808) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9468447 bytes
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:30:28.716468) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.4 rd, 159.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.4 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 5342, records dropped: 514 output_compression: NoCompression
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:30:28.716499) EVENT_LOG_v1 {"time_micros": 1769855428716484, "job": 26, "event": "compaction_finished", "compaction_time_micros": 59336, "compaction_time_cpu_micros": 27486, "output_level": 6, "num_output_files": 1, "total_output_size": 9468447, "num_input_records": 5342, "num_output_records": 4828, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855428717044, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855428718160, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:30:28.654172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:30:28.718215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:30:28.718221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:30:28.718225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:30:28.718228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:30:28 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:30:28.718231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
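The amplification figures in the JOB 26 summary follow directly from the sizes it logs: 3.3 MB read from L0, 7.4 MB read from L6, 9.0 MB written back to L6, and 5342 input records minus 514 dropped gives the 4828 keys in table #53. A quick arithmetic check of the two ratios:

    # Verifying the JOB 26 amplification numbers from the compaction summary above.
    in_l0, in_l6, out = 3.3, 7.4, 9.0          # MB, as logged
    write_amp = out / in_l0                     # 9.0 / 3.3  ~= 2.7  -> write-amplify(2.7)
    rw_amp = (in_l0 + in_l6 + out) / in_l0      # 19.7 / 3.3 ~= 6.0  -> read-write-amplify(6.0)
    print(round(write_amp, 1), round(rw_amp, 1))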
Jan 31 05:30:28 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:30 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:32 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:30:34 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:36 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:30:37.073 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:30:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:30:37.074 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:30:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:30:37.074 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:30:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:30:38 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:40 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:42 np0005603787 podman[251833]: 2026-01-31 10:30:42.854131243 +0000 UTC m=+0.072182360 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 31 05:30:42 np0005603787 podman[251834]: 2026-01-31 10:30:42.856574189 +0000 UTC m=+0.069755324 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 05:30:42 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:30:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:30:43
Jan 31 05:30:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:30:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:30:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', '.rgw.root', 'backups', 'default.rgw.control', 'default.rgw.log', 'vms', 'volumes', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta']
Jan 31 05:30:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:30:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:30:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:30:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:30:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:30:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:30:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:30:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:30:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:30:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:30:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:30:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:30:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:30:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:30:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:30:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:30:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:30:44 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:46 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:30:48 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:50 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:52 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 2.450943614167069e-07 of space, bias 1.0, pg target 7.352830842501207e-05 quantized to 32 (current 32)
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.527403468629877e-06 of space, bias 4.0, pg target 0.0018328841623558524 quantized to 16 (current 16)
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
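Each "pg target" above is the pool's usage ratio times its bias times a total PG budget of 300, which the autoscaler then quantizes (the log prints the quantized value next to the current pg_num; nothing changes here because the targets are tiny). The factor 300 is inferred from the arithmetic; it would correspond to 3 OSDs at the default mon_target_pg_per_osd of 100, but that is an assumption, not something this log states. Reproducing two of the logged values:

    # Reconstructing pg_autoscaler targets from the logged usage ratios.
    TOTAL_TARGET_PGS = 300   # assumption: 3 OSDs * mon_target_pg_per_osd(100)
    print(7.185749983720779e-06 * 1.0 * TOTAL_TARGET_PGS)   # 0.0021557249951... -> '.mgr', quantized to 1
    print(1.527403468629877e-06 * 4.0 * TOTAL_TARGET_PGS)   # 0.0018328841623... -> 'cephfs.cephfs.meta', stays at 16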
Jan 31 05:30:54 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:56 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:30:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:30:58 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:00 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:02 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:31:04 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:06 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 05:31:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 05:31:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:31:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:31:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:31:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:31:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:31:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:31:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:31:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:31:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:31:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:31:07 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:31:07 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:31:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:31:08 np0005603787 podman[252017]: 2026-01-31 10:31:08.198340801 +0000 UTC m=+0.039895653 container create cf249ad44ede6704339e174b7bf9ca5fd2fad49599fa0a35ba1df19536388372 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williams, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 05:31:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 05:31:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:31:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:31:08 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:31:08 np0005603787 systemd[1]: Started libpod-conmon-cf249ad44ede6704339e174b7bf9ca5fd2fad49599fa0a35ba1df19536388372.scope.
Jan 31 05:31:08 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:31:08 np0005603787 podman[252017]: 2026-01-31 10:31:08.181436412 +0000 UTC m=+0.022991294 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:31:08 np0005603787 podman[252017]: 2026-01-31 10:31:08.285442825 +0000 UTC m=+0.126997667 container init cf249ad44ede6704339e174b7bf9ca5fd2fad49599fa0a35ba1df19536388372 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williams, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 05:31:08 np0005603787 podman[252017]: 2026-01-31 10:31:08.293341259 +0000 UTC m=+0.134896101 container start cf249ad44ede6704339e174b7bf9ca5fd2fad49599fa0a35ba1df19536388372 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:31:08 np0005603787 nostalgic_williams[252034]: 167 167
Jan 31 05:31:08 np0005603787 podman[252017]: 2026-01-31 10:31:08.29776171 +0000 UTC m=+0.139316602 container attach cf249ad44ede6704339e174b7bf9ca5fd2fad49599fa0a35ba1df19536388372 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williams, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:31:08 np0005603787 systemd[1]: libpod-cf249ad44ede6704339e174b7bf9ca5fd2fad49599fa0a35ba1df19536388372.scope: Deactivated successfully.
Jan 31 05:31:08 np0005603787 conmon[252034]: conmon cf249ad44ede6704339e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cf249ad44ede6704339e174b7bf9ca5fd2fad49599fa0a35ba1df19536388372.scope/container/memory.events
Jan 31 05:31:08 np0005603787 podman[252017]: 2026-01-31 10:31:08.300440992 +0000 UTC m=+0.141995824 container died cf249ad44ede6704339e174b7bf9ca5fd2fad49599fa0a35ba1df19536388372 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williams, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 05:31:08 np0005603787 systemd[1]: var-lib-containers-storage-overlay-d5223282750c90ee335f6ed980df8aaa1e1898e675ab1822dcf34a029d986ec1-merged.mount: Deactivated successfully.
Jan 31 05:31:08 np0005603787 podman[252017]: 2026-01-31 10:31:08.343253544 +0000 UTC m=+0.184808376 container remove cf249ad44ede6704339e174b7bf9ca5fd2fad49599fa0a35ba1df19536388372 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:31:08 np0005603787 systemd[1]: libpod-conmon-cf249ad44ede6704339e174b7bf9ca5fd2fad49599fa0a35ba1df19536388372.scope: Deactivated successfully.
Jan 31 05:31:08 np0005603787 podman[252058]: 2026-01-31 10:31:08.529863458 +0000 UTC m=+0.061251842 container create cabab7c04304cce5c6a10db0cdaace3d2347706c5ada53ff5e09a313b6a83dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:31:08 np0005603787 systemd[1]: Started libpod-conmon-cabab7c04304cce5c6a10db0cdaace3d2347706c5ada53ff5e09a313b6a83dd6.scope.
Jan 31 05:31:08 np0005603787 podman[252058]: 2026-01-31 10:31:08.503943105 +0000 UTC m=+0.035331539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:31:08 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:31:08 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec71bfea427fe69d55815441a0b2ce9854b0e00fb3b329911a62e3c8c4a6ea0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:31:08 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec71bfea427fe69d55815441a0b2ce9854b0e00fb3b329911a62e3c8c4a6ea0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:31:08 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec71bfea427fe69d55815441a0b2ce9854b0e00fb3b329911a62e3c8c4a6ea0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:31:08 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec71bfea427fe69d55815441a0b2ce9854b0e00fb3b329911a62e3c8c4a6ea0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:31:08 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec71bfea427fe69d55815441a0b2ce9854b0e00fb3b329911a62e3c8c4a6ea0b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:31:08 np0005603787 podman[252058]: 2026-01-31 10:31:08.6349479 +0000 UTC m=+0.166336304 container init cabab7c04304cce5c6a10db0cdaace3d2347706c5ada53ff5e09a313b6a83dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 05:31:08 np0005603787 podman[252058]: 2026-01-31 10:31:08.649540717 +0000 UTC m=+0.180929101 container start cabab7c04304cce5c6a10db0cdaace3d2347706c5ada53ff5e09a313b6a83dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 05:31:08 np0005603787 podman[252058]: 2026-01-31 10:31:08.653899865 +0000 UTC m=+0.185288319 container attach cabab7c04304cce5c6a10db0cdaace3d2347706c5ada53ff5e09a313b6a83dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 05:31:08 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:09 np0005603787 charming_neumann[252075]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:31:09 np0005603787 charming_neumann[252075]: --> All data devices are unavailable
Jan 31 05:31:09 np0005603787 nova_compute[238603]: 2026-01-31 10:31:09.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:31:09 np0005603787 systemd[1]: libpod-cabab7c04304cce5c6a10db0cdaace3d2347706c5ada53ff5e09a313b6a83dd6.scope: Deactivated successfully.
Jan 31 05:31:09 np0005603787 podman[252058]: 2026-01-31 10:31:09.1172394 +0000 UTC m=+0.648627764 container died cabab7c04304cce5c6a10db0cdaace3d2347706c5ada53ff5e09a313b6a83dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_neumann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 05:31:09 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ec71bfea427fe69d55815441a0b2ce9854b0e00fb3b329911a62e3c8c4a6ea0b-merged.mount: Deactivated successfully.
Jan 31 05:31:09 np0005603787 podman[252058]: 2026-01-31 10:31:09.165456988 +0000 UTC m=+0.696845352 container remove cabab7c04304cce5c6a10db0cdaace3d2347706c5ada53ff5e09a313b6a83dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_neumann, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 05:31:09 np0005603787 systemd[1]: libpod-conmon-cabab7c04304cce5c6a10db0cdaace3d2347706c5ada53ff5e09a313b6a83dd6.scope: Deactivated successfully.
Jan 31 05:31:09 np0005603787 podman[252169]: 2026-01-31 10:31:09.60401183 +0000 UTC m=+0.058224341 container create 7430d4a1fcd20ac2785e73d06ba3f7d4f7a1e5ed46e1d6649850b7d2bd7406df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:31:09 np0005603787 systemd[1]: Started libpod-conmon-7430d4a1fcd20ac2785e73d06ba3f7d4f7a1e5ed46e1d6649850b7d2bd7406df.scope.
Jan 31 05:31:09 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:31:09 np0005603787 podman[252169]: 2026-01-31 10:31:09.578886478 +0000 UTC m=+0.033099059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:31:09 np0005603787 podman[252169]: 2026-01-31 10:31:09.68103374 +0000 UTC m=+0.135246311 container init 7430d4a1fcd20ac2785e73d06ba3f7d4f7a1e5ed46e1d6649850b7d2bd7406df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_agnesi, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:31:09 np0005603787 podman[252169]: 2026-01-31 10:31:09.690802375 +0000 UTC m=+0.145014896 container start 7430d4a1fcd20ac2785e73d06ba3f7d4f7a1e5ed46e1d6649850b7d2bd7406df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_agnesi, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 05:31:09 np0005603787 podman[252169]: 2026-01-31 10:31:09.694606369 +0000 UTC m=+0.148818940 container attach 7430d4a1fcd20ac2785e73d06ba3f7d4f7a1e5ed46e1d6649850b7d2bd7406df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:31:09 np0005603787 romantic_agnesi[252185]: 167 167
Jan 31 05:31:09 np0005603787 systemd[1]: libpod-7430d4a1fcd20ac2785e73d06ba3f7d4f7a1e5ed46e1d6649850b7d2bd7406df.scope: Deactivated successfully.
Jan 31 05:31:09 np0005603787 podman[252169]: 2026-01-31 10:31:09.696024107 +0000 UTC m=+0.150236628 container died 7430d4a1fcd20ac2785e73d06ba3f7d4f7a1e5ed46e1d6649850b7d2bd7406df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_agnesi, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:31:09 np0005603787 systemd[1]: var-lib-containers-storage-overlay-5e308cce3d032974c66126958edfe97095d704d6f61e5e5e04b04d5276bb61b7-merged.mount: Deactivated successfully.
Jan 31 05:31:09 np0005603787 podman[252169]: 2026-01-31 10:31:09.736043384 +0000 UTC m=+0.190255895 container remove 7430d4a1fcd20ac2785e73d06ba3f7d4f7a1e5ed46e1d6649850b7d2bd7406df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_agnesi, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 05:31:09 np0005603787 systemd[1]: libpod-conmon-7430d4a1fcd20ac2785e73d06ba3f7d4f7a1e5ed46e1d6649850b7d2bd7406df.scope: Deactivated successfully.
Jan 31 05:31:09 np0005603787 podman[252208]: 2026-01-31 10:31:09.890777673 +0000 UTC m=+0.049885475 container create 6d7d573241897d257e60310a1ccfa722692e9c2e439d6f51bbb790b0e0b91c08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_antonelli, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 05:31:09 np0005603787 systemd[1]: Started libpod-conmon-6d7d573241897d257e60310a1ccfa722692e9c2e439d6f51bbb790b0e0b91c08.scope.
Jan 31 05:31:09 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:31:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0950deeec5ef2e6291b19ad68cb0535fe67404e4de5889781893a21cdf1f605/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:31:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0950deeec5ef2e6291b19ad68cb0535fe67404e4de5889781893a21cdf1f605/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:31:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0950deeec5ef2e6291b19ad68cb0535fe67404e4de5889781893a21cdf1f605/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:31:09 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0950deeec5ef2e6291b19ad68cb0535fe67404e4de5889781893a21cdf1f605/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:31:09 np0005603787 podman[252208]: 2026-01-31 10:31:09.871360755 +0000 UTC m=+0.030468577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:31:09 np0005603787 podman[252208]: 2026-01-31 10:31:09.98574594 +0000 UTC m=+0.144853742 container init 6d7d573241897d257e60310a1ccfa722692e9c2e439d6f51bbb790b0e0b91c08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_antonelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 05:31:09 np0005603787 podman[252208]: 2026-01-31 10:31:09.990590932 +0000 UTC m=+0.149698744 container start 6d7d573241897d257e60310a1ccfa722692e9c2e439d6f51bbb790b0e0b91c08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 05:31:09 np0005603787 podman[252208]: 2026-01-31 10:31:09.994659262 +0000 UTC m=+0.153767074 container attach 6d7d573241897d257e60310a1ccfa722692e9c2e439d6f51bbb790b0e0b91c08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 05:31:10 np0005603787 nova_compute[238603]: 2026-01-31 10:31:10.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:31:10 np0005603787 nova_compute[238603]: 2026-01-31 10:31:10.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:31:10 np0005603787 nova_compute[238603]: 2026-01-31 10:31:10.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:31:10 np0005603787 nova_compute[238603]: 2026-01-31 10:31:10.104 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]: {
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:    "0": [
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:        {
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "devices": [
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "/dev/loop3"
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            ],
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "lv_name": "ceph_lv0",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "lv_size": "21470642176",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "name": "ceph_lv0",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "tags": {
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.cluster_name": "ceph",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.crush_device_class": "",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.encrypted": "0",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.objectstore": "bluestore",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.osd_id": "0",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.type": "block",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.vdo": "0",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.with_tpm": "0"
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            },
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "type": "block",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "vg_name": "ceph_vg0"
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:        }
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:    ],
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:    "1": [
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:        {
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "devices": [
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "/dev/loop4"
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            ],
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "lv_name": "ceph_lv1",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "lv_size": "21470642176",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "name": "ceph_lv1",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "tags": {
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.cluster_name": "ceph",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.crush_device_class": "",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.encrypted": "0",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.objectstore": "bluestore",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.osd_id": "1",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.type": "block",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.vdo": "0",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.with_tpm": "0"
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            },
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "type": "block",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "vg_name": "ceph_vg1"
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:        }
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:    ],
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:    "2": [
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:        {
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "devices": [
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "/dev/loop5"
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            ],
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "lv_name": "ceph_lv2",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "lv_size": "21470642176",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "name": "ceph_lv2",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "tags": {
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.cluster_name": "ceph",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.crush_device_class": "",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.encrypted": "0",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.objectstore": "bluestore",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.osd_id": "2",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.type": "block",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.vdo": "0",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:                "ceph.with_tpm": "0"
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            },
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "type": "block",
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:            "vg_name": "ceph_vg2"
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:        }
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]:    ]
Jan 31 05:31:10 np0005603787 nice_antonelli[252225]: }
Jan 31 05:31:10 np0005603787 systemd[1]: libpod-6d7d573241897d257e60310a1ccfa722692e9c2e439d6f51bbb790b0e0b91c08.scope: Deactivated successfully.
Jan 31 05:31:10 np0005603787 podman[252208]: 2026-01-31 10:31:10.298351234 +0000 UTC m=+0.457459096 container died 6d7d573241897d257e60310a1ccfa722692e9c2e439d6f51bbb790b0e0b91c08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_antonelli, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:31:10 np0005603787 systemd[1]: var-lib-containers-storage-overlay-a0950deeec5ef2e6291b19ad68cb0535fe67404e4de5889781893a21cdf1f605-merged.mount: Deactivated successfully.
Jan 31 05:31:10 np0005603787 podman[252208]: 2026-01-31 10:31:10.348279219 +0000 UTC m=+0.507387031 container remove 6d7d573241897d257e60310a1ccfa722692e9c2e439d6f51bbb790b0e0b91c08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_antonelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:31:10 np0005603787 systemd[1]: libpod-conmon-6d7d573241897d257e60310a1ccfa722692e9c2e439d6f51bbb790b0e0b91c08.scope: Deactivated successfully.
Jan 31 05:31:10 np0005603787 podman[252308]: 2026-01-31 10:31:10.74515789 +0000 UTC m=+0.047276334 container create 45183c6bb61ffe34eaa9e4a6f5d620c2b6d330a185f0b03882026824caabd9af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_lalande, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 05:31:10 np0005603787 systemd[1]: Started libpod-conmon-45183c6bb61ffe34eaa9e4a6f5d620c2b6d330a185f0b03882026824caabd9af.scope.
Jan 31 05:31:10 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:31:10 np0005603787 podman[252308]: 2026-01-31 10:31:10.722243998 +0000 UTC m=+0.024362482 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:31:10 np0005603787 podman[252308]: 2026-01-31 10:31:10.823430384 +0000 UTC m=+0.125548828 container init 45183c6bb61ffe34eaa9e4a6f5d620c2b6d330a185f0b03882026824caabd9af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_lalande, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 05:31:10 np0005603787 podman[252308]: 2026-01-31 10:31:10.830384563 +0000 UTC m=+0.132502977 container start 45183c6bb61ffe34eaa9e4a6f5d620c2b6d330a185f0b03882026824caabd9af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:31:10 np0005603787 nervous_lalande[252324]: 167 167
Jan 31 05:31:10 np0005603787 systemd[1]: libpod-45183c6bb61ffe34eaa9e4a6f5d620c2b6d330a185f0b03882026824caabd9af.scope: Deactivated successfully.
Jan 31 05:31:10 np0005603787 podman[252308]: 2026-01-31 10:31:10.833493948 +0000 UTC m=+0.135612372 container attach 45183c6bb61ffe34eaa9e4a6f5d620c2b6d330a185f0b03882026824caabd9af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 05:31:10 np0005603787 conmon[252324]: conmon 45183c6bb61ffe34eaa9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45183c6bb61ffe34eaa9e4a6f5d620c2b6d330a185f0b03882026824caabd9af.scope/container/memory.events
Jan 31 05:31:10 np0005603787 podman[252308]: 2026-01-31 10:31:10.834365411 +0000 UTC m=+0.136483835 container died 45183c6bb61ffe34eaa9e4a6f5d620c2b6d330a185f0b03882026824caabd9af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_lalande, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:31:10 np0005603787 systemd[1]: var-lib-containers-storage-overlay-2310981e814db414b355891270fd6be484761dac3d45cfe7c377cf21a93d010e-merged.mount: Deactivated successfully.
Jan 31 05:31:10 np0005603787 podman[252308]: 2026-01-31 10:31:10.862974588 +0000 UTC m=+0.165093002 container remove 45183c6bb61ffe34eaa9e4a6f5d620c2b6d330a185f0b03882026824caabd9af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_lalande, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:31:10 np0005603787 systemd[1]: libpod-conmon-45183c6bb61ffe34eaa9e4a6f5d620c2b6d330a185f0b03882026824caabd9af.scope: Deactivated successfully.
Jan 31 05:31:10 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:11 np0005603787 podman[252348]: 2026-01-31 10:31:11.040248868 +0000 UTC m=+0.058443577 container create 06b67f15932d54b7250d45e990efc64193fa54f0cda9d84827fe9e2bd7dc3d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:31:11 np0005603787 systemd[1]: Started libpod-conmon-06b67f15932d54b7250d45e990efc64193fa54f0cda9d84827fe9e2bd7dc3d08.scope.
Jan 31 05:31:11 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:31:11 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c9cd283ab03324c9f9938dbd9eac37fd2baca79bd16ec63ad714d6e08cd2983/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:31:11 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c9cd283ab03324c9f9938dbd9eac37fd2baca79bd16ec63ad714d6e08cd2983/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:31:11 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c9cd283ab03324c9f9938dbd9eac37fd2baca79bd16ec63ad714d6e08cd2983/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:31:11 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c9cd283ab03324c9f9938dbd9eac37fd2baca79bd16ec63ad714d6e08cd2983/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:31:11 np0005603787 podman[252348]: 2026-01-31 10:31:11.01487122 +0000 UTC m=+0.033065989 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:31:11 np0005603787 podman[252348]: 2026-01-31 10:31:11.131134525 +0000 UTC m=+0.149329294 container init 06b67f15932d54b7250d45e990efc64193fa54f0cda9d84827fe9e2bd7dc3d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:31:11 np0005603787 podman[252348]: 2026-01-31 10:31:11.147391517 +0000 UTC m=+0.165586226 container start 06b67f15932d54b7250d45e990efc64193fa54f0cda9d84827fe9e2bd7dc3d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 05:31:11 np0005603787 podman[252348]: 2026-01-31 10:31:11.151380105 +0000 UTC m=+0.169574844 container attach 06b67f15932d54b7250d45e990efc64193fa54f0cda9d84827fe9e2bd7dc3d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:31:11 np0005603787 lvm[252440]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:31:11 np0005603787 lvm[252440]: VG ceph_vg0 finished
Jan 31 05:31:11 np0005603787 lvm[252443]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:31:11 np0005603787 lvm[252443]: VG ceph_vg1 finished
Jan 31 05:31:11 np0005603787 lvm[252445]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:31:11 np0005603787 lvm[252445]: VG ceph_vg2 finished
Jan 31 05:31:11 np0005603787 friendly_galileo[252364]: {}
Jan 31 05:31:11 np0005603787 systemd[1]: libpod-06b67f15932d54b7250d45e990efc64193fa54f0cda9d84827fe9e2bd7dc3d08.scope: Deactivated successfully.
Jan 31 05:31:11 np0005603787 podman[252348]: 2026-01-31 10:31:11.886232128 +0000 UTC m=+0.904426827 container died 06b67f15932d54b7250d45e990efc64193fa54f0cda9d84827fe9e2bd7dc3d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 05:31:11 np0005603787 systemd[1]: libpod-06b67f15932d54b7250d45e990efc64193fa54f0cda9d84827fe9e2bd7dc3d08.scope: Consumed 1.116s CPU time.
Jan 31 05:31:11 np0005603787 systemd[1]: var-lib-containers-storage-overlay-3c9cd283ab03324c9f9938dbd9eac37fd2baca79bd16ec63ad714d6e08cd2983-merged.mount: Deactivated successfully.
Jan 31 05:31:11 np0005603787 podman[252348]: 2026-01-31 10:31:11.937596882 +0000 UTC m=+0.955791551 container remove 06b67f15932d54b7250d45e990efc64193fa54f0cda9d84827fe9e2bd7dc3d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:31:11 np0005603787 systemd[1]: libpod-conmon-06b67f15932d54b7250d45e990efc64193fa54f0cda9d84827fe9e2bd7dc3d08.scope: Deactivated successfully.
Jan 31 05:31:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:31:11 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:31:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:31:12 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:31:12 np0005603787 nova_compute[238603]: 2026-01-31 10:31:12.104 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:31:12 np0005603787 nova_compute[238603]: 2026-01-31 10:31:12.105 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:31:12 np0005603787 nova_compute[238603]: 2026-01-31 10:31:12.105 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:31:12 np0005603787 nova_compute[238603]: 2026-01-31 10:31:12.124 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:31:12 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:31:12 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:31:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:13 np0005603787 nova_compute[238603]: 2026-01-31 10:31:13.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:31:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:31:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:31:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:31:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:31:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:31:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:31:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:31:13 np0005603787 podman[252488]: 2026-01-31 10:31:13.877223842 +0000 UTC m=+0.086520988 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 05:31:13 np0005603787 podman[252487]: 2026-01-31 10:31:13.910056964 +0000 UTC m=+0.119240937 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 05:31:14 np0005603787 nova_compute[238603]: 2026-01-31 10:31:14.099 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:31:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:15 np0005603787 nova_compute[238603]: 2026-01-31 10:31:15.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:31:15 np0005603787 nova_compute[238603]: 2026-01-31 10:31:15.128 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:31:15 np0005603787 nova_compute[238603]: 2026-01-31 10:31:15.129 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:31:15 np0005603787 nova_compute[238603]: 2026-01-31 10:31:15.129 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:31:15 np0005603787 nova_compute[238603]: 2026-01-31 10:31:15.130 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:31:15 np0005603787 nova_compute[238603]: 2026-01-31 10:31:15.130 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:31:15 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:31:15 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2913957504' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:31:15 np0005603787 nova_compute[238603]: 2026-01-31 10:31:15.698 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:31:15 np0005603787 nova_compute[238603]: 2026-01-31 10:31:15.884 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:31:15 np0005603787 nova_compute[238603]: 2026-01-31 10:31:15.886 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5023MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:31:15 np0005603787 nova_compute[238603]: 2026-01-31 10:31:15.886 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:31:15 np0005603787 nova_compute[238603]: 2026-01-31 10:31:15.886 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:31:15 np0005603787 nova_compute[238603]: 2026-01-31 10:31:15.947 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:31:15 np0005603787 nova_compute[238603]: 2026-01-31 10:31:15.948 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:31:15 np0005603787 nova_compute[238603]: 2026-01-31 10:31:15.974 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:31:16 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:31:16 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4218012618' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:31:16 np0005603787 nova_compute[238603]: 2026-01-31 10:31:16.465 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:31:16 np0005603787 nova_compute[238603]: 2026-01-31 10:31:16.473 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:31:16 np0005603787 nova_compute[238603]: 2026-01-31 10:31:16.492 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:31:16 np0005603787 nova_compute[238603]: 2026-01-31 10:31:16.495 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:31:16 np0005603787 nova_compute[238603]: 2026-01-31 10:31:16.496 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:31:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:31:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:20 np0005603787 nova_compute[238603]: 2026-01-31 10:31:20.497 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:31:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:31:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1061621818' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:31:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:31:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1061621818' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:31:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:31:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:31:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:31:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:31:37.074 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:31:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:31:37.075 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:31:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:31:37.075 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:31:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:31:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:31:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:31:43
Jan 31 05:31:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:31:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:31:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'vms', 'volumes', 'images', 'default.rgw.log', 'backups', '.mgr', 'default.rgw.control']
Jan 31 05:31:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:31:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:31:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:31:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:31:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:31:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:31:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:31:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:31:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:31:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:31:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:31:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:31:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:31:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:31:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:31:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:31:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:31:44 np0005603787 podman[252578]: 2026-01-31 10:31:44.856962658 +0000 UTC m=+0.064436810 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 05:31:44 np0005603787 podman[252577]: 2026-01-31 10:31:44.896546862 +0000 UTC m=+0.111774144 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Jan 31 05:31:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:31:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 2.450943614167069e-07 of space, bias 1.0, pg target 7.352830842501207e-05 quantized to 32 (current 32)
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.527403468629877e-06 of space, bias 4.0, pg target 0.0018328841623558524 quantized to 16 (current 16)
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:31:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:31:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:31:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:31:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:32:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:32:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:10 np0005603787 nova_compute[238603]: 2026-01-31 10:32:10.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:32:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:11 np0005603787 nova_compute[238603]: 2026-01-31 10:32:11.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:32:11 np0005603787 nova_compute[238603]: 2026-01-31 10:32:11.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:32:11 np0005603787 nova_compute[238603]: 2026-01-31 10:32:11.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:32:11 np0005603787 nova_compute[238603]: 2026-01-31 10:32:11.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:32:12 np0005603787 podman[252718]: 2026-01-31 10:32:12.696180464 +0000 UTC m=+0.107292483 container exec 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 05:32:12 np0005603787 podman[252718]: 2026-01-31 10:32:12.826422509 +0000 UTC m=+0.237534518 container exec_died 1cb6a2ad0c52f65a03512fc45c5f9abf84541c639633c47899a99e7122aa7891 (image=quay.io/ceph/ceph:v20, name=ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 05:32:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:13 np0005603787 nova_compute[238603]: 2026-01-31 10:32:13.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:32:13 np0005603787 nova_compute[238603]: 2026-01-31 10:32:13.105 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:32:13 np0005603787 nova_compute[238603]: 2026-01-31 10:32:13.105 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:32:13 np0005603787 nova_compute[238603]: 2026-01-31 10:32:13.124 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:32:13 np0005603787 nova_compute[238603]: 2026-01-31 10:32:13.126 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:32:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:32:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:32:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:32:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:32:13 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:32:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:32:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:32:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:32:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:32:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:32:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:32:14 np0005603787 nova_compute[238603]: 2026-01-31 10:32:14.121 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:32:14 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:32:14 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:32:14 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:32:14 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:32:14 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:32:14 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:32:14 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:32:14 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:32:14 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:32:14 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:32:14 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:32:14 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:32:14 np0005603787 podman[253050]: 2026-01-31 10:32:14.577967264 +0000 UTC m=+0.068589022 container create 713b0fede5812b8a7511993d03da0c3f5d2dbfb6cc8aee5c1b21a83c1c3dba97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_diffie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 05:32:14 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:32:14 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:32:14 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:32:14 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:32:14 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:32:14 np0005603787 systemd[1]: Started libpod-conmon-713b0fede5812b8a7511993d03da0c3f5d2dbfb6cc8aee5c1b21a83c1c3dba97.scope.
Jan 31 05:32:14 np0005603787 podman[253050]: 2026-01-31 10:32:14.54834308 +0000 UTC m=+0.038964848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:32:14 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:32:14 np0005603787 podman[253050]: 2026-01-31 10:32:14.679185971 +0000 UTC m=+0.169807769 container init 713b0fede5812b8a7511993d03da0c3f5d2dbfb6cc8aee5c1b21a83c1c3dba97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 05:32:14 np0005603787 podman[253050]: 2026-01-31 10:32:14.688434553 +0000 UTC m=+0.179056311 container start 713b0fede5812b8a7511993d03da0c3f5d2dbfb6cc8aee5c1b21a83c1c3dba97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_diffie, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:32:14 np0005603787 podman[253050]: 2026-01-31 10:32:14.692524573 +0000 UTC m=+0.183146331 container attach 713b0fede5812b8a7511993d03da0c3f5d2dbfb6cc8aee5c1b21a83c1c3dba97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 05:32:14 np0005603787 unruffled_diffie[253066]: 167 167
Jan 31 05:32:14 np0005603787 systemd[1]: libpod-713b0fede5812b8a7511993d03da0c3f5d2dbfb6cc8aee5c1b21a83c1c3dba97.scope: Deactivated successfully.
Jan 31 05:32:14 np0005603787 podman[253050]: 2026-01-31 10:32:14.694960389 +0000 UTC m=+0.185582137 container died 713b0fede5812b8a7511993d03da0c3f5d2dbfb6cc8aee5c1b21a83c1c3dba97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_diffie, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:32:14 np0005603787 systemd[1]: var-lib-containers-storage-overlay-642469dbedc0ab2bd0e7e06b9f09884be49c51ac15b10c63dcba588d394b11ad-merged.mount: Deactivated successfully.
Jan 31 05:32:14 np0005603787 podman[253050]: 2026-01-31 10:32:14.756314454 +0000 UTC m=+0.246936212 container remove 713b0fede5812b8a7511993d03da0c3f5d2dbfb6cc8aee5c1b21a83c1c3dba97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:32:14 np0005603787 systemd[1]: libpod-conmon-713b0fede5812b8a7511993d03da0c3f5d2dbfb6cc8aee5c1b21a83c1c3dba97.scope: Deactivated successfully.
Jan 31 05:32:14 np0005603787 podman[253090]: 2026-01-31 10:32:14.936200226 +0000 UTC m=+0.047415487 container create dcb10ae3ec3d2850b16d698a4144b06b6a9e003e23ebcde509b0e1d92ce83184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_brattain, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:32:14 np0005603787 systemd[1]: Started libpod-conmon-dcb10ae3ec3d2850b16d698a4144b06b6a9e003e23ebcde509b0e1d92ce83184.scope.
Jan 31 05:32:15 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:32:15 np0005603787 podman[253090]: 2026-01-31 10:32:14.913655994 +0000 UTC m=+0.024871235 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:32:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146f11e28c604dacbae5c63feb8c3587daab26480d45f499fd9d618e55e3422f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:32:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146f11e28c604dacbae5c63feb8c3587daab26480d45f499fd9d618e55e3422f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:32:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146f11e28c604dacbae5c63feb8c3587daab26480d45f499fd9d618e55e3422f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:32:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146f11e28c604dacbae5c63feb8c3587daab26480d45f499fd9d618e55e3422f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:32:15 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146f11e28c604dacbae5c63feb8c3587daab26480d45f499fd9d618e55e3422f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:32:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:15 np0005603787 podman[253090]: 2026-01-31 10:32:15.054270581 +0000 UTC m=+0.165485912 container init dcb10ae3ec3d2850b16d698a4144b06b6a9e003e23ebcde509b0e1d92ce83184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_brattain, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:32:15 np0005603787 podman[253090]: 2026-01-31 10:32:15.064327543 +0000 UTC m=+0.175542774 container start dcb10ae3ec3d2850b16d698a4144b06b6a9e003e23ebcde509b0e1d92ce83184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_brattain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:32:15 np0005603787 podman[253090]: 2026-01-31 10:32:15.068386914 +0000 UTC m=+0.179602175 container attach dcb10ae3ec3d2850b16d698a4144b06b6a9e003e23ebcde509b0e1d92ce83184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 31 05:32:15 np0005603787 podman[253105]: 2026-01-31 10:32:15.070069679 +0000 UTC m=+0.087083194 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 05:32:15 np0005603787 podman[253104]: 2026-01-31 10:32:15.088753236 +0000 UTC m=+0.105743350 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true)
Jan 31 05:32:15 np0005603787 nova_compute[238603]: 2026-01-31 10:32:15.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:32:15 np0005603787 nova_compute[238603]: 2026-01-31 10:32:15.128 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:32:15 np0005603787 nova_compute[238603]: 2026-01-31 10:32:15.129 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:32:15 np0005603787 nova_compute[238603]: 2026-01-31 10:32:15.129 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:32:15 np0005603787 nova_compute[238603]: 2026-01-31 10:32:15.129 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:32:15 np0005603787 nova_compute[238603]: 2026-01-31 10:32:15.129 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:32:15 np0005603787 brave_brattain[253118]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:32:15 np0005603787 brave_brattain[253118]: --> All data devices are unavailable
Jan 31 05:32:15 np0005603787 systemd[1]: libpod-dcb10ae3ec3d2850b16d698a4144b06b6a9e003e23ebcde509b0e1d92ce83184.scope: Deactivated successfully.
Jan 31 05:32:15 np0005603787 podman[253090]: 2026-01-31 10:32:15.526355573 +0000 UTC m=+0.637570824 container died dcb10ae3ec3d2850b16d698a4144b06b6a9e003e23ebcde509b0e1d92ce83184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_brattain, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 05:32:15 np0005603787 systemd[1]: var-lib-containers-storage-overlay-146f11e28c604dacbae5c63feb8c3587daab26480d45f499fd9d618e55e3422f-merged.mount: Deactivated successfully.
Jan 31 05:32:15 np0005603787 podman[253090]: 2026-01-31 10:32:15.571443506 +0000 UTC m=+0.682658737 container remove dcb10ae3ec3d2850b16d698a4144b06b6a9e003e23ebcde509b0e1d92ce83184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_brattain, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 05:32:15 np0005603787 systemd[1]: libpod-conmon-dcb10ae3ec3d2850b16d698a4144b06b6a9e003e23ebcde509b0e1d92ce83184.scope: Deactivated successfully.
Jan 31 05:32:15 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:32:15 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4215721659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:32:15 np0005603787 nova_compute[238603]: 2026-01-31 10:32:15.679 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:32:15 np0005603787 nova_compute[238603]: 2026-01-31 10:32:15.857 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:32:15 np0005603787 nova_compute[238603]: 2026-01-31 10:32:15.858 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5137MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:32:15 np0005603787 nova_compute[238603]: 2026-01-31 10:32:15.858 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:32:15 np0005603787 nova_compute[238603]: 2026-01-31 10:32:15.859 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:32:15 np0005603787 nova_compute[238603]: 2026-01-31 10:32:15.944 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:32:15 np0005603787 nova_compute[238603]: 2026-01-31 10:32:15.945 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:32:15 np0005603787 nova_compute[238603]: 2026-01-31 10:32:15.977 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:32:16 np0005603787 podman[253265]: 2026-01-31 10:32:16.06826675 +0000 UTC m=+0.057725648 container create f629c1ad8d482ab51ea53e090b69da24ec8011d7f7173aa97626595d4c04ee24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_keldysh, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:32:16 np0005603787 systemd[1]: Started libpod-conmon-f629c1ad8d482ab51ea53e090b69da24ec8011d7f7173aa97626595d4c04ee24.scope.
Jan 31 05:32:16 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:32:16 np0005603787 podman[253265]: 2026-01-31 10:32:16.043559239 +0000 UTC m=+0.033018177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:32:16 np0005603787 podman[253265]: 2026-01-31 10:32:16.152247449 +0000 UTC m=+0.141706407 container init f629c1ad8d482ab51ea53e090b69da24ec8011d7f7173aa97626595d4c04ee24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 05:32:16 np0005603787 podman[253265]: 2026-01-31 10:32:16.163320469 +0000 UTC m=+0.152779367 container start f629c1ad8d482ab51ea53e090b69da24ec8011d7f7173aa97626595d4c04ee24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_keldysh, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:32:16 np0005603787 podman[253265]: 2026-01-31 10:32:16.166879225 +0000 UTC m=+0.156338203 container attach f629c1ad8d482ab51ea53e090b69da24ec8011d7f7173aa97626595d4c04ee24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:32:16 np0005603787 priceless_keldysh[253300]: 167 167
Jan 31 05:32:16 np0005603787 systemd[1]: libpod-f629c1ad8d482ab51ea53e090b69da24ec8011d7f7173aa97626595d4c04ee24.scope: Deactivated successfully.
Jan 31 05:32:16 np0005603787 podman[253265]: 2026-01-31 10:32:16.171782759 +0000 UTC m=+0.161241667 container died f629c1ad8d482ab51ea53e090b69da24ec8011d7f7173aa97626595d4c04ee24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_keldysh, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:32:16 np0005603787 systemd[1]: var-lib-containers-storage-overlay-3555562f66f6974f9512ca2a26d0d15a47347c8e46b10d0f2ffffe9f68c97241-merged.mount: Deactivated successfully.
Jan 31 05:32:16 np0005603787 podman[253265]: 2026-01-31 10:32:16.219762601 +0000 UTC m=+0.209221499 container remove f629c1ad8d482ab51ea53e090b69da24ec8011d7f7173aa97626595d4c04ee24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_keldysh, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:32:16 np0005603787 systemd[1]: libpod-conmon-f629c1ad8d482ab51ea53e090b69da24ec8011d7f7173aa97626595d4c04ee24.scope: Deactivated successfully.
Jan 31 05:32:16 np0005603787 podman[253324]: 2026-01-31 10:32:16.379442834 +0000 UTC m=+0.043171962 container create 0c78d6e56b975942226d36697336ad19817a027fcf87bf0423a78956445092ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 05:32:16 np0005603787 systemd[1]: Started libpod-conmon-0c78d6e56b975942226d36697336ad19817a027fcf87bf0423a78956445092ea.scope.
Jan 31 05:32:16 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:32:16 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fbb652483bbb42b463e5f12b7ee405e4c5afb2d05e1caf44cd4585542dfb64e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:32:16 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fbb652483bbb42b463e5f12b7ee405e4c5afb2d05e1caf44cd4585542dfb64e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:32:16 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fbb652483bbb42b463e5f12b7ee405e4c5afb2d05e1caf44cd4585542dfb64e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:32:16 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fbb652483bbb42b463e5f12b7ee405e4c5afb2d05e1caf44cd4585542dfb64e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:32:16 np0005603787 podman[253324]: 2026-01-31 10:32:16.361481107 +0000 UTC m=+0.025210275 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:32:16 np0005603787 podman[253324]: 2026-01-31 10:32:16.468908633 +0000 UTC m=+0.132637791 container init 0c78d6e56b975942226d36697336ad19817a027fcf87bf0423a78956445092ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mayer, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:32:16 np0005603787 podman[253324]: 2026-01-31 10:32:16.477000193 +0000 UTC m=+0.140729341 container start 0c78d6e56b975942226d36697336ad19817a027fcf87bf0423a78956445092ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mayer, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:32:16 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:32:16 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/698421883' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:32:16 np0005603787 podman[253324]: 2026-01-31 10:32:16.482638385 +0000 UTC m=+0.146367583 container attach 0c78d6e56b975942226d36697336ad19817a027fcf87bf0423a78956445092ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:32:16 np0005603787 nova_compute[238603]: 2026-01-31 10:32:16.497 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:32:16 np0005603787 nova_compute[238603]: 2026-01-31 10:32:16.504 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:32:16 np0005603787 nova_compute[238603]: 2026-01-31 10:32:16.526 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:32:16 np0005603787 nova_compute[238603]: 2026-01-31 10:32:16.527 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:32:16 np0005603787 nova_compute[238603]: 2026-01-31 10:32:16.528 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:32:16 np0005603787 musing_mayer[253342]: {
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:    "0": [
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:        {
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "devices": [
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "/dev/loop3"
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            ],
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "lv_name": "ceph_lv0",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "lv_size": "21470642176",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "name": "ceph_lv0",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "tags": {
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.cluster_name": "ceph",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.crush_device_class": "",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.encrypted": "0",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.objectstore": "bluestore",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.osd_id": "0",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.type": "block",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.vdo": "0",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.with_tpm": "0"
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            },
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "type": "block",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "vg_name": "ceph_vg0"
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:        }
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:    ],
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:    "1": [
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:        {
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "devices": [
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "/dev/loop4"
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            ],
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "lv_name": "ceph_lv1",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "lv_size": "21470642176",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "name": "ceph_lv1",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "tags": {
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.cluster_name": "ceph",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.crush_device_class": "",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.encrypted": "0",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.objectstore": "bluestore",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.osd_id": "1",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.type": "block",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.vdo": "0",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.with_tpm": "0"
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            },
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "type": "block",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "vg_name": "ceph_vg1"
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:        }
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:    ],
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:    "2": [
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:        {
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "devices": [
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "/dev/loop5"
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            ],
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "lv_name": "ceph_lv2",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "lv_size": "21470642176",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "name": "ceph_lv2",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "tags": {
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.cluster_name": "ceph",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.crush_device_class": "",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.encrypted": "0",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.objectstore": "bluestore",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.osd_id": "2",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.type": "block",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.vdo": "0",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:                "ceph.with_tpm": "0"
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            },
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "type": "block",
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:            "vg_name": "ceph_vg2"
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:        }
Jan 31 05:32:16 np0005603787 musing_mayer[253342]:    ]
Jan 31 05:32:16 np0005603787 musing_mayer[253342]: }
Jan 31 05:32:16 np0005603787 systemd[1]: libpod-0c78d6e56b975942226d36697336ad19817a027fcf87bf0423a78956445092ea.scope: Deactivated successfully.
Jan 31 05:32:16 np0005603787 podman[253324]: 2026-01-31 10:32:16.789845832 +0000 UTC m=+0.453574960 container died 0c78d6e56b975942226d36697336ad19817a027fcf87bf0423a78956445092ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mayer, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 05:32:16 np0005603787 systemd[1]: var-lib-containers-storage-overlay-2fbb652483bbb42b463e5f12b7ee405e4c5afb2d05e1caf44cd4585542dfb64e-merged.mount: Deactivated successfully.
Jan 31 05:32:16 np0005603787 podman[253324]: 2026-01-31 10:32:16.847054095 +0000 UTC m=+0.510783213 container remove 0c78d6e56b975942226d36697336ad19817a027fcf87bf0423a78956445092ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 05:32:16 np0005603787 systemd[1]: libpod-conmon-0c78d6e56b975942226d36697336ad19817a027fcf87bf0423a78956445092ea.scope: Deactivated successfully.
Jan 31 05:32:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:17 np0005603787 podman[253427]: 2026-01-31 10:32:17.316875816 +0000 UTC m=+0.053994597 container create 1e119db3fdafb6e0e9c02f088d53d6ba2f4d1af07ca11c689dad452335b44d9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_driscoll, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:32:17 np0005603787 systemd[1]: Started libpod-conmon-1e119db3fdafb6e0e9c02f088d53d6ba2f4d1af07ca11c689dad452335b44d9c.scope.
Jan 31 05:32:17 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:32:17 np0005603787 podman[253427]: 2026-01-31 10:32:17.382024994 +0000 UTC m=+0.119143735 container init 1e119db3fdafb6e0e9c02f088d53d6ba2f4d1af07ca11c689dad452335b44d9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_driscoll, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 05:32:17 np0005603787 podman[253427]: 2026-01-31 10:32:17.388298815 +0000 UTC m=+0.125417556 container start 1e119db3fdafb6e0e9c02f088d53d6ba2f4d1af07ca11c689dad452335b44d9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_driscoll, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 05:32:17 np0005603787 podman[253427]: 2026-01-31 10:32:17.296034231 +0000 UTC m=+0.033153002 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:32:17 np0005603787 wizardly_driscoll[253443]: 167 167
Jan 31 05:32:17 np0005603787 systemd[1]: libpod-1e119db3fdafb6e0e9c02f088d53d6ba2f4d1af07ca11c689dad452335b44d9c.scope: Deactivated successfully.
Jan 31 05:32:17 np0005603787 podman[253427]: 2026-01-31 10:32:17.391803479 +0000 UTC m=+0.128922230 container attach 1e119db3fdafb6e0e9c02f088d53d6ba2f4d1af07ca11c689dad452335b44d9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:32:17 np0005603787 podman[253427]: 2026-01-31 10:32:17.392650993 +0000 UTC m=+0.129769734 container died 1e119db3fdafb6e0e9c02f088d53d6ba2f4d1af07ca11c689dad452335b44d9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_driscoll, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 05:32:17 np0005603787 systemd[1]: var-lib-containers-storage-overlay-098044a8f327f749efbf04d709dba88bd8d505e5d5a407bea4662b717bce9049-merged.mount: Deactivated successfully.
Jan 31 05:32:17 np0005603787 podman[253427]: 2026-01-31 10:32:17.43346798 +0000 UTC m=+0.170586731 container remove 1e119db3fdafb6e0e9c02f088d53d6ba2f4d1af07ca11c689dad452335b44d9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 05:32:17 np0005603787 systemd[1]: libpod-conmon-1e119db3fdafb6e0e9c02f088d53d6ba2f4d1af07ca11c689dad452335b44d9c.scope: Deactivated successfully.
Jan 31 05:32:17 np0005603787 podman[253466]: 2026-01-31 10:32:17.595738884 +0000 UTC m=+0.060751339 container create 3c09b0fe41b10fa6fa843258b3ce624973509ed6726caed50b6e63437204f430 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_newton, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 05:32:17 np0005603787 systemd[1]: Started libpod-conmon-3c09b0fe41b10fa6fa843258b3ce624973509ed6726caed50b6e63437204f430.scope.
Jan 31 05:32:17 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:32:17 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1506b6ad9ffa1bbf7271a44c32a4d8757ead1c694401955231e0ada6e5a7ff3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:32:17 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1506b6ad9ffa1bbf7271a44c32a4d8757ead1c694401955231e0ada6e5a7ff3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:32:17 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1506b6ad9ffa1bbf7271a44c32a4d8757ead1c694401955231e0ada6e5a7ff3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:32:17 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1506b6ad9ffa1bbf7271a44c32a4d8757ead1c694401955231e0ada6e5a7ff3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:32:17 np0005603787 podman[253466]: 2026-01-31 10:32:17.566354207 +0000 UTC m=+0.031366722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:32:17 np0005603787 podman[253466]: 2026-01-31 10:32:17.68439959 +0000 UTC m=+0.149412135 container init 3c09b0fe41b10fa6fa843258b3ce624973509ed6726caed50b6e63437204f430 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_newton, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 05:32:17 np0005603787 podman[253466]: 2026-01-31 10:32:17.69138813 +0000 UTC m=+0.156400565 container start 3c09b0fe41b10fa6fa843258b3ce624973509ed6726caed50b6e63437204f430 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_newton, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:32:17 np0005603787 podman[253466]: 2026-01-31 10:32:17.696422357 +0000 UTC m=+0.161434912 container attach 3c09b0fe41b10fa6fa843258b3ce624973509ed6726caed50b6e63437204f430 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:32:18.154838) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855538154902, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1354, "num_deletes": 508, "total_data_size": 1687724, "memory_usage": 1722248, "flush_reason": "Manual Compaction"}
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855538167171, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1274903, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23140, "largest_seqno": 24493, "table_properties": {"data_size": 1269556, "index_size": 2231, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15205, "raw_average_key_size": 18, "raw_value_size": 1256521, "raw_average_value_size": 1562, "num_data_blocks": 101, "num_entries": 804, "num_filter_entries": 804, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769855429, "oldest_key_time": 1769855429, "file_creation_time": 1769855538, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 12368 microseconds, and 3053 cpu microseconds.
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:32:18.167214) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1274903 bytes OK
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:32:18.167232) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:32:18.171343) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:32:18.171381) EVENT_LOG_v1 {"time_micros": 1769855538171373, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:32:18.171405) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1680570, prev total WAL file size 1680570, number of live WAL files 2.
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:32:18.172051) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1245KB)], [53(9246KB)]
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855538172145, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10743350, "oldest_snapshot_seqno": -1}
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4632 keys, 7667150 bytes, temperature: kUnknown
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855538230033, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7667150, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7635961, "index_size": 18535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 115716, "raw_average_key_size": 24, "raw_value_size": 7551907, "raw_average_value_size": 1630, "num_data_blocks": 771, "num_entries": 4632, "num_filter_entries": 4632, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853439, "oldest_key_time": 0, "file_creation_time": 1769855538, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:32:18.230261) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7667150 bytes
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:32:18.234538) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.4 rd, 132.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.0 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(14.4) write-amplify(6.0) OK, records in: 5632, records dropped: 1000 output_compression: NoCompression
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:32:18.234559) EVENT_LOG_v1 {"time_micros": 1769855538234550, "job": 28, "event": "compaction_finished", "compaction_time_micros": 57961, "compaction_time_cpu_micros": 15736, "output_level": 6, "num_output_files": 1, "total_output_size": 7667150, "num_input_records": 5632, "num_output_records": 4632, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855538234885, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855538235985, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:32:18.171940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:32:18.236120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:32:18.236129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:32:18.236132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:32:18.236135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:32:18.236139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:32:18 np0005603787 lvm[253559]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:32:18 np0005603787 lvm[253559]: VG ceph_vg0 finished
Jan 31 05:32:18 np0005603787 lvm[253562]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:32:18 np0005603787 lvm[253562]: VG ceph_vg1 finished
Jan 31 05:32:18 np0005603787 lvm[253564]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:32:18 np0005603787 lvm[253564]: VG ceph_vg2 finished
Jan 31 05:32:18 np0005603787 compassionate_newton[253483]: {}
Jan 31 05:32:18 np0005603787 systemd[1]: libpod-3c09b0fe41b10fa6fa843258b3ce624973509ed6726caed50b6e63437204f430.scope: Deactivated successfully.
Jan 31 05:32:18 np0005603787 systemd[1]: libpod-3c09b0fe41b10fa6fa843258b3ce624973509ed6726caed50b6e63437204f430.scope: Consumed 1.063s CPU time.
Jan 31 05:32:18 np0005603787 podman[253466]: 2026-01-31 10:32:18.459148067 +0000 UTC m=+0.924160532 container died 3c09b0fe41b10fa6fa843258b3ce624973509ed6726caed50b6e63437204f430 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 05:32:18 np0005603787 systemd[1]: var-lib-containers-storage-overlay-1506b6ad9ffa1bbf7271a44c32a4d8757ead1c694401955231e0ada6e5a7ff3f-merged.mount: Deactivated successfully.
Jan 31 05:32:18 np0005603787 podman[253466]: 2026-01-31 10:32:18.732814774 +0000 UTC m=+1.197827249 container remove 3c09b0fe41b10fa6fa843258b3ce624973509ed6726caed50b6e63437204f430 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:32:18 np0005603787 systemd[1]: libpod-conmon-3c09b0fe41b10fa6fa843258b3ce624973509ed6726caed50b6e63437204f430.scope: Deactivated successfully.
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:32:18 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:32:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:19 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:32:19 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:32:20 np0005603787 nova_compute[238603]: 2026-01-31 10:32:20.523 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:32:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:21 np0005603787 nova_compute[238603]: 2026-01-31 10:32:21.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:32:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:32:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2816808998' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:32:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:32:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2816808998' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:32:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:32:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:32:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:32:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:32:37.076 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:32:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:32:37.077 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:32:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:32:37.077 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:32:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:32:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:32:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:32:43
Jan 31 05:32:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:32:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:32:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.log', 'volumes']
Jan 31 05:32:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:32:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:32:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:32:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:32:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:32:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:32:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:32:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:32:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:32:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:32:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:32:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:32:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:32:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:32:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:32:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:32:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:32:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:45 np0005603787 podman[253605]: 2026-01-31 10:32:45.859106912 +0000 UTC m=+0.067693838 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 05:32:45 np0005603787 podman[253604]: 2026-01-31 10:32:45.883421191 +0000 UTC m=+0.091880514 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 05:32:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:32:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 2.450943614167069e-07 of space, bias 1.0, pg target 7.352830842501207e-05 quantized to 32 (current 32)
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.527403468629877e-06 of space, bias 4.0, pg target 0.0018328841623558524 quantized to 16 (current 16)
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:32:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
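
The pg_autoscaler block above reports, per pool, the fraction of raw capacity in use, the pool's bias, a fractional pg target, and the value after quantization (tiny targets are left at the current pg_num because the change is below the autoscaler's threshold). The printed targets are consistent with target = capacity_ratio × bias × 300, where 300 would be the default mon_target_pg_per_osd (100) times this cluster's 3 OSDs; the constant and the formula are inferred from the numbers, not taken from Ceph's source. A short check against the values logged above:

    # Assumed model: target = capacity_ratio * bias * mon_target_pg_per_osd * num_osds
    MON_TARGET_PG_PER_OSD = 100   # Ceph default
    NUM_OSDS = 3                  # this cluster: 60 GiB spread over 3 OSDs

    pools = [
        # (name, capacity_ratio, bias) copied from the log lines above
        (".mgr",               7.185749983720779e-06,  1.0),
        ("images",             2.450943614167069e-07,  1.0),
        ("cephfs.cephfs.meta", 1.527403468629877e-06,  4.0),
        ("default.rgw.log",    4.1969867161554995e-06, 1.0),
    ]

    for name, ratio, bias in pools:
        target = ratio * bias * MON_TARGET_PG_PER_OSD * NUM_OSDS
        print(f"{name}: pg target {target}")
    # .mgr               -> ~0.0021557249951162337  (matches the log)
    # images             -> ~7.352830842501207e-05
    # cephfs.cephfs.meta -> ~0.0018328841623558524
    # default.rgw.log    -> ~0.0012590960148466499
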
Jan 31 05:32:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:32:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:32:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:33:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:33:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:12 np0005603787 nova_compute[238603]: 2026-01-31 10:33:12.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:33:12 np0005603787 nova_compute[238603]: 2026-01-31 10:33:12.104 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:33:12 np0005603787 nova_compute[238603]: 2026-01-31 10:33:12.104 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:33:12 np0005603787 nova_compute[238603]: 2026-01-31 10:33:12.104 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
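
The nova_compute lines above are oslo.service's periodic task runner cycling through ComputeManager's decorated methods; _reclaim_queued_deletes returns immediately because reclaim_instance_interval is left at its default of 0. A minimal, hedged sketch of the decorator pattern behind the "Running periodic task ..." lines (only the oslo.config and oslo.service APIs are real; the class body is illustrative):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    class ComputeManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        # run_periodic_tasks() finds every decorated method and logs
        # "Running periodic task ComputeManager.<name>" before invoking it.
        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                # mirrors the "CONF.reclaim_instance_interval <= 0, skipping..."
                # DEBUG line above
                return

    # A service would call ComputeManager().run_periodic_tasks(context) on a timer.
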
Jan 31 05:33:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:13 np0005603787 nova_compute[238603]: 2026-01-31 10:33:13.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:33:13 np0005603787 nova_compute[238603]: 2026-01-31 10:33:13.104 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:33:13 np0005603787 nova_compute[238603]: 2026-01-31 10:33:13.104 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:33:13 np0005603787 nova_compute[238603]: 2026-01-31 10:33:13.127 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:33:13 np0005603787 nova_compute[238603]: 2026-01-31 10:33:13.128 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:33:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:33:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:33:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:33:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:33:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:33:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:33:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:33:14 np0005603787 nova_compute[238603]: 2026-01-31 10:33:14.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:33:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:15 np0005603787 nova_compute[238603]: 2026-01-31 10:33:15.099 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:33:16 np0005603787 podman[253649]: 2026-01-31 10:33:16.833971748 +0000 UTC m=+0.051825147 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 31 05:33:16 np0005603787 podman[253648]: 2026-01-31 10:33:16.851723369 +0000 UTC m=+0.073593407 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 05:33:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:17 np0005603787 nova_compute[238603]: 2026-01-31 10:33:17.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:33:17 np0005603787 nova_compute[238603]: 2026-01-31 10:33:17.222 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:33:17 np0005603787 nova_compute[238603]: 2026-01-31 10:33:17.223 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:33:17 np0005603787 nova_compute[238603]: 2026-01-31 10:33:17.223 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:33:17 np0005603787 nova_compute[238603]: 2026-01-31 10:33:17.223 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:33:17 np0005603787 nova_compute[238603]: 2026-01-31 10:33:17.224 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:33:17 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:33:17 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1036819872' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:33:17 np0005603787 nova_compute[238603]: 2026-01-31 10:33:17.764 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
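
The resource audit above shells out to "ceph df --format=json" via oslo.concurrency's processutils, and the mon's audit channel records the matching "df" dispatch from client.openstack. A small sketch of that call and of reading the cluster totals back out of the JSON; which fields nova's RBD driver actually consumes is not visible in this log, so the parsing below is illustrative:

    import json
    from oslo_concurrency import processutils

    # Same command as in the log; execute() emits the "Running cmd (subprocess)"
    # and 'CMD "..." returned: 0' DEBUG lines seen above.
    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")

    stats = json.loads(out)
    # "stats" and "pools" are standard top-level keys of ceph df JSON output.
    total_kb = stats["stats"]["total_bytes"] // 1024
    avail_kb = stats["stats"]["total_avail_bytes"] // 1024
    print(f"cluster: {avail_kb} KiB free of {total_kb} KiB")
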
Jan 31 05:33:17 np0005603787 nova_compute[238603]: 2026-01-31 10:33:17.939 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:33:17 np0005603787 nova_compute[238603]: 2026-01-31 10:33:17.940 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5118MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:33:17 np0005603787 nova_compute[238603]: 2026-01-31 10:33:17.940 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:33:17 np0005603787 nova_compute[238603]: 2026-01-31 10:33:17.941 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:33:18 np0005603787 nova_compute[238603]: 2026-01-31 10:33:18.027 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:33:18 np0005603787 nova_compute[238603]: 2026-01-31 10:33:18.027 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:33:18 np0005603787 nova_compute[238603]: 2026-01-31 10:33:18.046 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:33:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:33:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:33:18 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/7386297' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:33:18 np0005603787 nova_compute[238603]: 2026-01-31 10:33:18.588 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:33:18 np0005603787 nova_compute[238603]: 2026-01-31 10:33:18.596 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:33:18 np0005603787 nova_compute[238603]: 2026-01-31 10:33:18.611 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:33:18 np0005603787 nova_compute[238603]: 2026-01-31 10:33:18.612 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:33:18 np0005603787 nova_compute[238603]: 2026-01-31 10:33:18.613 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
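
The inventory dictionary logged above is what the resource tracker hands to Placement; schedulable capacity for each resource class follows the usual Placement rule (total - reserved) × allocation_ratio, which for these values gives 32 VCPUs, 7167 MB of RAM and 53.1 GB of disk. A short check of that arithmetic, using the dict as logged but trimmed to the fields the calculation needs:

    # Inventory values from the "Inventory has not changed for provider" line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        # Placement capacity: (total - reserved) * allocation_ratio
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 53.1
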
Jan 31 05:33:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:33:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:33:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:33:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:33:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:33:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:33:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:33:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:33:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:33:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:33:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:33:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:33:19 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:33:19 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:33:19 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:33:20 np0005603787 podman[253879]: 2026-01-31 10:33:20.009116409 +0000 UTC m=+0.039326129 container create 19c1a17f0a569604574ade4bf61ed701f72b6f91cb26d76696aefed50c88e37f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 05:33:20 np0005603787 systemd[1]: Started libpod-conmon-19c1a17f0a569604574ade4bf61ed701f72b6f91cb26d76696aefed50c88e37f.scope.
Jan 31 05:33:20 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:33:20 np0005603787 podman[253879]: 2026-01-31 10:33:19.987449351 +0000 UTC m=+0.017659051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:33:20 np0005603787 podman[253879]: 2026-01-31 10:33:20.091422253 +0000 UTC m=+0.121631953 container init 19c1a17f0a569604574ade4bf61ed701f72b6f91cb26d76696aefed50c88e37f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 05:33:20 np0005603787 podman[253879]: 2026-01-31 10:33:20.097428785 +0000 UTC m=+0.127638495 container start 19c1a17f0a569604574ade4bf61ed701f72b6f91cb26d76696aefed50c88e37f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:33:20 np0005603787 quirky_perlman[253896]: 167 167
Jan 31 05:33:20 np0005603787 systemd[1]: libpod-19c1a17f0a569604574ade4bf61ed701f72b6f91cb26d76696aefed50c88e37f.scope: Deactivated successfully.
Jan 31 05:33:20 np0005603787 podman[253879]: 2026-01-31 10:33:20.104649092 +0000 UTC m=+0.134858772 container attach 19c1a17f0a569604574ade4bf61ed701f72b6f91cb26d76696aefed50c88e37f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_perlman, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 05:33:20 np0005603787 podman[253879]: 2026-01-31 10:33:20.105140175 +0000 UTC m=+0.135349875 container died 19c1a17f0a569604574ade4bf61ed701f72b6f91cb26d76696aefed50c88e37f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_perlman, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:33:20 np0005603787 systemd[1]: var-lib-containers-storage-overlay-36751363ac6300b7bb105fe4c0ed8af2ddb45845534697881c340d9f3d01637f-merged.mount: Deactivated successfully.
Jan 31 05:33:20 np0005603787 podman[253879]: 2026-01-31 10:33:20.151796171 +0000 UTC m=+0.182005851 container remove 19c1a17f0a569604574ade4bf61ed701f72b6f91cb26d76696aefed50c88e37f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_perlman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:33:20 np0005603787 systemd[1]: libpod-conmon-19c1a17f0a569604574ade4bf61ed701f72b6f91cb26d76696aefed50c88e37f.scope: Deactivated successfully.
Jan 31 05:33:20 np0005603787 podman[253920]: 2026-01-31 10:33:20.265160408 +0000 UTC m=+0.033303345 container create 018c7336f2da8e1984d208dce39cd277a44f128e5aafd49fc624c850a35740fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_snyder, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 05:33:20 np0005603787 systemd[1]: Started libpod-conmon-018c7336f2da8e1984d208dce39cd277a44f128e5aafd49fc624c850a35740fb.scope.
Jan 31 05:33:20 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:33:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88079aef0928cded75b5c7a6d9c2708a7dc1ba66f2ff5a568f37d859ef7d46eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:33:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88079aef0928cded75b5c7a6d9c2708a7dc1ba66f2ff5a568f37d859ef7d46eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:33:20 np0005603787 podman[253920]: 2026-01-31 10:33:20.25050748 +0000 UTC m=+0.018650437 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:33:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88079aef0928cded75b5c7a6d9c2708a7dc1ba66f2ff5a568f37d859ef7d46eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:33:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88079aef0928cded75b5c7a6d9c2708a7dc1ba66f2ff5a568f37d859ef7d46eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:33:20 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88079aef0928cded75b5c7a6d9c2708a7dc1ba66f2ff5a568f37d859ef7d46eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:33:20 np0005603787 podman[253920]: 2026-01-31 10:33:20.364491493 +0000 UTC m=+0.132634510 container init 018c7336f2da8e1984d208dce39cd277a44f128e5aafd49fc624c850a35740fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 05:33:20 np0005603787 podman[253920]: 2026-01-31 10:33:20.374981689 +0000 UTC m=+0.143124666 container start 018c7336f2da8e1984d208dce39cd277a44f128e5aafd49fc624c850a35740fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 05:33:20 np0005603787 podman[253920]: 2026-01-31 10:33:20.37872364 +0000 UTC m=+0.146866607 container attach 018c7336f2da8e1984d208dce39cd277a44f128e5aafd49fc624c850a35740fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 05:33:20 np0005603787 admiring_snyder[253937]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:33:20 np0005603787 admiring_snyder[253937]: --> All data devices are unavailable
Jan 31 05:33:20 np0005603787 systemd[1]: libpod-018c7336f2da8e1984d208dce39cd277a44f128e5aafd49fc624c850a35740fb.scope: Deactivated successfully.
Jan 31 05:33:20 np0005603787 podman[253920]: 2026-01-31 10:33:20.83527816 +0000 UTC m=+0.603421087 container died 018c7336f2da8e1984d208dce39cd277a44f128e5aafd49fc624c850a35740fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 05:33:20 np0005603787 systemd[1]: var-lib-containers-storage-overlay-88079aef0928cded75b5c7a6d9c2708a7dc1ba66f2ff5a568f37d859ef7d46eb-merged.mount: Deactivated successfully.
Jan 31 05:33:20 np0005603787 podman[253920]: 2026-01-31 10:33:20.888794313 +0000 UTC m=+0.656937260 container remove 018c7336f2da8e1984d208dce39cd277a44f128e5aafd49fc624c850a35740fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 05:33:20 np0005603787 systemd[1]: libpod-conmon-018c7336f2da8e1984d208dce39cd277a44f128e5aafd49fc624c850a35740fb.scope: Deactivated successfully.
Jan 31 05:33:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:21 np0005603787 podman[254035]: 2026-01-31 10:33:21.321262839 +0000 UTC m=+0.051568049 container create a63cf90a369f3e9e99dbdb953de8beff02fd8aac2160a1bec5110f71928577fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_golick, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 05:33:21 np0005603787 systemd[1]: Started libpod-conmon-a63cf90a369f3e9e99dbdb953de8beff02fd8aac2160a1bec5110f71928577fc.scope.
Jan 31 05:33:21 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:33:21 np0005603787 podman[254035]: 2026-01-31 10:33:21.299359175 +0000 UTC m=+0.029664425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:33:21 np0005603787 podman[254035]: 2026-01-31 10:33:21.406142583 +0000 UTC m=+0.136447773 container init a63cf90a369f3e9e99dbdb953de8beff02fd8aac2160a1bec5110f71928577fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_golick, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 05:33:21 np0005603787 podman[254035]: 2026-01-31 10:33:21.41449139 +0000 UTC m=+0.144796590 container start a63cf90a369f3e9e99dbdb953de8beff02fd8aac2160a1bec5110f71928577fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 05:33:21 np0005603787 lucid_golick[254052]: 167 167
Jan 31 05:33:21 np0005603787 systemd[1]: libpod-a63cf90a369f3e9e99dbdb953de8beff02fd8aac2160a1bec5110f71928577fc.scope: Deactivated successfully.
Jan 31 05:33:21 np0005603787 podman[254035]: 2026-01-31 10:33:21.429014734 +0000 UTC m=+0.159319914 container attach a63cf90a369f3e9e99dbdb953de8beff02fd8aac2160a1bec5110f71928577fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_golick, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 05:33:21 np0005603787 podman[254035]: 2026-01-31 10:33:21.43068408 +0000 UTC m=+0.160989260 container died a63cf90a369f3e9e99dbdb953de8beff02fd8aac2160a1bec5110f71928577fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_golick, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:33:21 np0005603787 systemd[1]: var-lib-containers-storage-overlay-48c9ee3fb911e8f7e717f5a7f3ea1e7e44c3dfd9d5f94b50d6b9e894fd9cbf25-merged.mount: Deactivated successfully.
Jan 31 05:33:21 np0005603787 podman[254035]: 2026-01-31 10:33:21.509226791 +0000 UTC m=+0.239531961 container remove a63cf90a369f3e9e99dbdb953de8beff02fd8aac2160a1bec5110f71928577fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:33:21 np0005603787 systemd[1]: libpod-conmon-a63cf90a369f3e9e99dbdb953de8beff02fd8aac2160a1bec5110f71928577fc.scope: Deactivated successfully.
Jan 31 05:33:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:33:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3804480223' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:33:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:33:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3804480223' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:33:21 np0005603787 podman[254078]: 2026-01-31 10:33:21.672406189 +0000 UTC m=+0.048552108 container create 0e9e02ac39285f1eea21a8c7bc9ae493c94012f1890b115737027d8c46b8d29b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:33:21 np0005603787 systemd[1]: Started libpod-conmon-0e9e02ac39285f1eea21a8c7bc9ae493c94012f1890b115737027d8c46b8d29b.scope.
Jan 31 05:33:21 np0005603787 podman[254078]: 2026-01-31 10:33:21.646729553 +0000 UTC m=+0.022875492 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:33:21 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:33:21 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15bbe2b134dc6cabb66a3f578016032b92279293fba3a2ce9e824a62f008e3f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:33:21 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15bbe2b134dc6cabb66a3f578016032b92279293fba3a2ce9e824a62f008e3f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:33:21 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15bbe2b134dc6cabb66a3f578016032b92279293fba3a2ce9e824a62f008e3f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:33:21 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15bbe2b134dc6cabb66a3f578016032b92279293fba3a2ce9e824a62f008e3f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:33:21 np0005603787 podman[254078]: 2026-01-31 10:33:21.785768626 +0000 UTC m=+0.161914575 container init 0e9e02ac39285f1eea21a8c7bc9ae493c94012f1890b115737027d8c46b8d29b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 05:33:21 np0005603787 podman[254078]: 2026-01-31 10:33:21.791686336 +0000 UTC m=+0.167832265 container start 0e9e02ac39285f1eea21a8c7bc9ae493c94012f1890b115737027d8c46b8d29b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:33:21 np0005603787 podman[254078]: 2026-01-31 10:33:21.80470099 +0000 UTC m=+0.180846929 container attach 0e9e02ac39285f1eea21a8c7bc9ae493c94012f1890b115737027d8c46b8d29b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_allen, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:33:22 np0005603787 elastic_allen[254094]: {
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:    "0": [
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:        {
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "devices": [
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "/dev/loop3"
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            ],
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "lv_name": "ceph_lv0",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "lv_size": "21470642176",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "name": "ceph_lv0",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "tags": {
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.cluster_name": "ceph",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.crush_device_class": "",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.encrypted": "0",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.objectstore": "bluestore",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.osd_id": "0",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.type": "block",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.vdo": "0",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.with_tpm": "0"
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            },
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "type": "block",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "vg_name": "ceph_vg0"
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:        }
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:    ],
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:    "1": [
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:        {
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "devices": [
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "/dev/loop4"
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            ],
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "lv_name": "ceph_lv1",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "lv_size": "21470642176",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "name": "ceph_lv1",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "tags": {
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.cluster_name": "ceph",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.crush_device_class": "",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.encrypted": "0",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.objectstore": "bluestore",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.osd_id": "1",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.type": "block",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.vdo": "0",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.with_tpm": "0"
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            },
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "type": "block",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "vg_name": "ceph_vg1"
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:        }
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:    ],
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:    "2": [
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:        {
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "devices": [
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "/dev/loop5"
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            ],
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "lv_name": "ceph_lv2",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "lv_size": "21470642176",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "name": "ceph_lv2",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "tags": {
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.cluster_name": "ceph",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.crush_device_class": "",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.encrypted": "0",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.objectstore": "bluestore",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.osd_id": "2",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.type": "block",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.vdo": "0",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:                "ceph.with_tpm": "0"
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            },
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "type": "block",
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:            "vg_name": "ceph_vg2"
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:        }
Jan 31 05:33:22 np0005603787 elastic_allen[254094]:    ]
Jan 31 05:33:22 np0005603787 elastic_allen[254094]: }
Jan 31 05:33:22 np0005603787 systemd[1]: libpod-0e9e02ac39285f1eea21a8c7bc9ae493c94012f1890b115737027d8c46b8d29b.scope: Deactivated successfully.
Jan 31 05:33:22 np0005603787 podman[254078]: 2026-01-31 10:33:22.10022999 +0000 UTC m=+0.476375979 container died 0e9e02ac39285f1eea21a8c7bc9ae493c94012f1890b115737027d8c46b8d29b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 05:33:22 np0005603787 systemd[1]: var-lib-containers-storage-overlay-15bbe2b134dc6cabb66a3f578016032b92279293fba3a2ce9e824a62f008e3f5-merged.mount: Deactivated successfully.
Jan 31 05:33:22 np0005603787 podman[254078]: 2026-01-31 10:33:22.15473745 +0000 UTC m=+0.530883379 container remove 0e9e02ac39285f1eea21a8c7bc9ae493c94012f1890b115737027d8c46b8d29b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:33:22 np0005603787 systemd[1]: libpod-conmon-0e9e02ac39285f1eea21a8c7bc9ae493c94012f1890b115737027d8c46b8d29b.scope: Deactivated successfully.
Jan 31 05:33:22 np0005603787 podman[254177]: 2026-01-31 10:33:22.535980646 +0000 UTC m=+0.053602395 container create 5d88a217c875e32f263d6b706bcc0afbb34d37be5235c3f2d77c4435ce9148bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_heyrovsky, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:33:22 np0005603787 systemd[1]: Started libpod-conmon-5d88a217c875e32f263d6b706bcc0afbb34d37be5235c3f2d77c4435ce9148bf.scope.
Jan 31 05:33:22 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:33:22 np0005603787 podman[254177]: 2026-01-31 10:33:22.513810194 +0000 UTC m=+0.031431993 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:33:22 np0005603787 podman[254177]: 2026-01-31 10:33:22.606600002 +0000 UTC m=+0.124221771 container init 5d88a217c875e32f263d6b706bcc0afbb34d37be5235c3f2d77c4435ce9148bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_heyrovsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:33:22 np0005603787 podman[254177]: 2026-01-31 10:33:22.615112584 +0000 UTC m=+0.132734343 container start 5d88a217c875e32f263d6b706bcc0afbb34d37be5235c3f2d77c4435ce9148bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_heyrovsky, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:33:22 np0005603787 condescending_heyrovsky[254193]: 167 167
Jan 31 05:33:22 np0005603787 systemd[1]: libpod-5d88a217c875e32f263d6b706bcc0afbb34d37be5235c3f2d77c4435ce9148bf.scope: Deactivated successfully.
Jan 31 05:33:22 np0005603787 podman[254177]: 2026-01-31 10:33:22.61904019 +0000 UTC m=+0.136661939 container attach 5d88a217c875e32f263d6b706bcc0afbb34d37be5235c3f2d77c4435ce9148bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_heyrovsky, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:33:22 np0005603787 conmon[254193]: conmon 5d88a217c875e32f263d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d88a217c875e32f263d6b706bcc0afbb34d37be5235c3f2d77c4435ce9148bf.scope/container/memory.events
Jan 31 05:33:22 np0005603787 podman[254177]: 2026-01-31 10:33:22.620663195 +0000 UTC m=+0.138284974 container died 5d88a217c875e32f263d6b706bcc0afbb34d37be5235c3f2d77c4435ce9148bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:33:22 np0005603787 systemd[1]: var-lib-containers-storage-overlay-773cd0203ac221ab524ab0f5660fca37dba949fa31b6e41eef457891324ee6a3-merged.mount: Deactivated successfully.
Jan 31 05:33:22 np0005603787 podman[254177]: 2026-01-31 10:33:22.660929167 +0000 UTC m=+0.178550906 container remove 5d88a217c875e32f263d6b706bcc0afbb34d37be5235c3f2d77c4435ce9148bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:33:22 np0005603787 systemd[1]: libpod-conmon-5d88a217c875e32f263d6b706bcc0afbb34d37be5235c3f2d77c4435ce9148bf.scope: Deactivated successfully.
Jan 31 05:33:22 np0005603787 podman[254218]: 2026-01-31 10:33:22.812860681 +0000 UTC m=+0.036663536 container create 87f08c864cf124b141509a973aee7a54d363d15cde19dabe2c01f1e24e6b292b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_elgamal, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:33:22 np0005603787 systemd[1]: Started libpod-conmon-87f08c864cf124b141509a973aee7a54d363d15cde19dabe2c01f1e24e6b292b.scope.
Jan 31 05:33:22 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:33:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90fd03e33e5dc86ab1543161c312d5bbd254cf672b57818e4515d936a975807/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:33:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90fd03e33e5dc86ab1543161c312d5bbd254cf672b57818e4515d936a975807/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:33:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90fd03e33e5dc86ab1543161c312d5bbd254cf672b57818e4515d936a975807/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:33:22 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90fd03e33e5dc86ab1543161c312d5bbd254cf672b57818e4515d936a975807/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:33:22 np0005603787 podman[254218]: 2026-01-31 10:33:22.798439479 +0000 UTC m=+0.022242354 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:33:22 np0005603787 podman[254218]: 2026-01-31 10:33:22.899856502 +0000 UTC m=+0.123659387 container init 87f08c864cf124b141509a973aee7a54d363d15cde19dabe2c01f1e24e6b292b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_elgamal, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 05:33:22 np0005603787 podman[254218]: 2026-01-31 10:33:22.904864797 +0000 UTC m=+0.128667652 container start 87f08c864cf124b141509a973aee7a54d363d15cde19dabe2c01f1e24e6b292b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_elgamal, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 05:33:22 np0005603787 podman[254218]: 2026-01-31 10:33:22.909541004 +0000 UTC m=+0.133343889 container attach 87f08c864cf124b141509a973aee7a54d363d15cde19dabe2c01f1e24e6b292b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 05:33:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:33:23 np0005603787 lvm[254313]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:33:23 np0005603787 lvm[254313]: VG ceph_vg0 finished
Jan 31 05:33:23 np0005603787 lvm[254314]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:33:23 np0005603787 lvm[254314]: VG ceph_vg1 finished
Jan 31 05:33:23 np0005603787 lvm[254316]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:33:23 np0005603787 lvm[254316]: VG ceph_vg2 finished
Jan 31 05:33:23 np0005603787 dazzling_elgamal[254235]: {}
Jan 31 05:33:23 np0005603787 systemd[1]: libpod-87f08c864cf124b141509a973aee7a54d363d15cde19dabe2c01f1e24e6b292b.scope: Deactivated successfully.
Jan 31 05:33:23 np0005603787 systemd[1]: libpod-87f08c864cf124b141509a973aee7a54d363d15cde19dabe2c01f1e24e6b292b.scope: Consumed 1.118s CPU time.
Jan 31 05:33:23 np0005603787 podman[254218]: 2026-01-31 10:33:23.687459026 +0000 UTC m=+0.911261921 container died 87f08c864cf124b141509a973aee7a54d363d15cde19dabe2c01f1e24e6b292b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:33:23 np0005603787 systemd[1]: var-lib-containers-storage-overlay-c90fd03e33e5dc86ab1543161c312d5bbd254cf672b57818e4515d936a975807-merged.mount: Deactivated successfully.
Jan 31 05:33:23 np0005603787 podman[254218]: 2026-01-31 10:33:23.756183842 +0000 UTC m=+0.979986697 container remove 87f08c864cf124b141509a973aee7a54d363d15cde19dabe2c01f1e24e6b292b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_elgamal, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:33:23 np0005603787 systemd[1]: libpod-conmon-87f08c864cf124b141509a973aee7a54d363d15cde19dabe2c01f1e24e6b292b.scope: Deactivated successfully.
Jan 31 05:33:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:33:23 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:33:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:33:23 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:33:24 np0005603787 nova_compute[238603]: 2026-01-31 10:33:24.612 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:33:24 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:33:24 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:33:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:33:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:33:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:36 np0005603787 systemd-logind[786]: New session 52 of user zuul.
Jan 31 05:33:36 np0005603787 systemd[1]: Started Session 52 of User zuul.
Jan 31 05:33:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:33:37.078 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:33:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:33:37.079 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:33:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:33:37.079 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:33:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:33:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:33:38.874 154765 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:08:49', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:80:fe:bf:9d:90'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 05:33:38 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:33:38.876 154765 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 05:33:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:42 np0005603787 systemd[1]: session-52.scope: Deactivated successfully.
Jan 31 05:33:42 np0005603787 systemd-logind[786]: Session 52 logged out. Waiting for processes to exit.
Jan 31 05:33:42 np0005603787 systemd-logind[786]: Removed session 52.
Jan 31 05:33:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:33:43
Jan 31 05:33:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:33:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:33:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'volumes', 'default.rgw.control', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'images']
Jan 31 05:33:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:33:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:33:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:33:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:33:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:33:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:33:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:33:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:33:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:33:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:33:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:33:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:33:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:33:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:33:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:33:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:33:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:33:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:33:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:47 np0005603787 podman[254616]: 2026-01-31 10:33:47.847972143 +0000 UTC m=+0.061059027 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 05:33:47 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:33:47.879 154765 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ef41023c-ae05-4c9a-b1cb-d6bd86d05fb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 05:33:47 np0005603787 podman[254615]: 2026-01-31 10:33:47.919022922 +0000 UTC m=+0.128699744 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 05:33:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:33:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 2.450943614167069e-07 of space, bias 1.0, pg target 7.352830842501207e-05 quantized to 32 (current 32)
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.527403468629877e-06 of space, bias 4.0, pg target 0.0018328841623558524 quantized to 16 (current 16)
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:33:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:33:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:33:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:33:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:34:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:34:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:10 np0005603787 nova_compute[238603]: 2026-01-31 10:34:10.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:34:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:13 np0005603787 nova_compute[238603]: 2026-01-31 10:34:13.139 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:34:13 np0005603787 nova_compute[238603]: 2026-01-31 10:34:13.139 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:34:13 np0005603787 nova_compute[238603]: 2026-01-31 10:34:13.140 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:34:13 np0005603787 nova_compute[238603]: 2026-01-31 10:34:13.160 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:34:13 np0005603787 nova_compute[238603]: 2026-01-31 10:34:13.161 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:34:13 np0005603787 nova_compute[238603]: 2026-01-31 10:34:13.161 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:34:13 np0005603787 nova_compute[238603]: 2026-01-31 10:34:13.161 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:34:13 np0005603787 nova_compute[238603]: 2026-01-31 10:34:13.161 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:34:13 np0005603787 nova_compute[238603]: 2026-01-31 10:34:13.162 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 05:34:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:34:13 np0005603787 nova_compute[238603]: 2026-01-31 10:34:13.182 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 05:34:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:34:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:34:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:34:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:34:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:34:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:34:14 np0005603787 nova_compute[238603]: 2026-01-31 10:34:14.123 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:34:14 np0005603787 nova_compute[238603]: 2026-01-31 10:34:14.124 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:34:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:15 np0005603787 nova_compute[238603]: 2026-01-31 10:34:15.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:34:16 np0005603787 nova_compute[238603]: 2026-01-31 10:34:16.098 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:34:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:34:18 np0005603787 podman[254656]: 2026-01-31 10:34:18.857638031 +0000 UTC m=+0.064143782 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 05:34:18 np0005603787 podman[254655]: 2026-01-31 10:34:18.889959268 +0000 UTC m=+0.102452771 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 05:34:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:19 np0005603787 nova_compute[238603]: 2026-01-31 10:34:19.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:34:19 np0005603787 nova_compute[238603]: 2026-01-31 10:34:19.142 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 05:34:19 np0005603787 nova_compute[238603]: 2026-01-31 10:34:19.143 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 05:34:19 np0005603787 nova_compute[238603]: 2026-01-31 10:34:19.143 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 05:34:19 np0005603787 nova_compute[238603]: 2026-01-31 10:34:19.144 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 05:34:19 np0005603787 nova_compute[238603]: 2026-01-31 10:34:19.144 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 05:34:19 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:34:19 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/962441802' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:34:19 np0005603787 nova_compute[238603]: 2026-01-31 10:34:19.668 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 05:34:19 np0005603787 nova_compute[238603]: 2026-01-31 10:34:19.828 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 05:34:19 np0005603787 nova_compute[238603]: 2026-01-31 10:34:19.829 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5119MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 05:34:19 np0005603787 nova_compute[238603]: 2026-01-31 10:34:19.830 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 05:34:19 np0005603787 nova_compute[238603]: 2026-01-31 10:34:19.830 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 05:34:20 np0005603787 nova_compute[238603]: 2026-01-31 10:34:20.017 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 05:34:20 np0005603787 nova_compute[238603]: 2026-01-31 10:34:20.017 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 05:34:20 np0005603787 nova_compute[238603]: 2026-01-31 10:34:20.146 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Refreshing inventories for resource provider 207962d2-1ba9-4db2-8533-2a30e7131f3e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 05:34:20 np0005603787 nova_compute[238603]: 2026-01-31 10:34:20.244 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Updating ProviderTree inventory for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 05:34:20 np0005603787 nova_compute[238603]: 2026-01-31 10:34:20.245 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Updating inventory in ProviderTree for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 05:34:20 np0005603787 nova_compute[238603]: 2026-01-31 10:34:20.261 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Refreshing aggregate associations for resource provider 207962d2-1ba9-4db2-8533-2a30e7131f3e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 05:34:20 np0005603787 nova_compute[238603]: 2026-01-31 10:34:20.287 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Refreshing trait associations for resource provider 207962d2-1ba9-4db2-8533-2a30e7131f3e, traits: COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SVM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE41,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 05:34:20 np0005603787 nova_compute[238603]: 2026-01-31 10:34:20.309 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 05:34:20 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:34:20 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1382747655' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:34:20 np0005603787 nova_compute[238603]: 2026-01-31 10:34:20.854 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 05:34:20 np0005603787 nova_compute[238603]: 2026-01-31 10:34:20.859 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 05:34:20 np0005603787 nova_compute[238603]: 2026-01-31 10:34:20.895 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 05:34:20 np0005603787 nova_compute[238603]: 2026-01-31 10:34:20.897 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 05:34:20 np0005603787 nova_compute[238603]: 2026-01-31 10:34:20.897 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 05:34:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:34:21.429285) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855661429370, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1197, "num_deletes": 251, "total_data_size": 1866150, "memory_usage": 1889008, "flush_reason": "Manual Compaction"}
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855661529948, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1837716, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24494, "largest_seqno": 25690, "table_properties": {"data_size": 1831944, "index_size": 3167, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12024, "raw_average_key_size": 19, "raw_value_size": 1820419, "raw_average_value_size": 2989, "num_data_blocks": 142, "num_entries": 609, "num_filter_entries": 609, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769855538, "oldest_key_time": 1769855538, "file_creation_time": 1769855661, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 100721 microseconds, and 4228 cpu microseconds.
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:34:21.530013) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1837716 bytes OK
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:34:21.530042) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:34:21.573753) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:34:21.573803) EVENT_LOG_v1 {"time_micros": 1769855661573791, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:34:21.573831) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1860712, prev total WAL file size 1860712, number of live WAL files 2.
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:34:21.574767) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1794KB)], [56(7487KB)]
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855661574845, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9504866, "oldest_snapshot_seqno": -1}
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2837604711' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2837604711' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4727 keys, 7758241 bytes, temperature: kUnknown
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855661687181, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7758241, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7726322, "index_size": 19009, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 118313, "raw_average_key_size": 25, "raw_value_size": 7640427, "raw_average_value_size": 1616, "num_data_blocks": 785, "num_entries": 4727, "num_filter_entries": 4727, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853439, "oldest_key_time": 0, "file_creation_time": 1769855661, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:34:21.687398) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7758241 bytes
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:34:21.699009) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 84.6 rd, 69.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.3 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(9.4) write-amplify(4.2) OK, records in: 5241, records dropped: 514 output_compression: NoCompression
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:34:21.699158) EVENT_LOG_v1 {"time_micros": 1769855661699050, "job": 30, "event": "compaction_finished", "compaction_time_micros": 112401, "compaction_time_cpu_micros": 12400, "output_level": 6, "num_output_files": 1, "total_output_size": 7758241, "num_input_records": 5241, "num_output_records": 4727, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855661699751, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855661701575, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:34:21.574593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:34:21.701616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:34:21.701624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:34:21.701626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:34:21.701629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:34:21 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:34:21.701631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:34:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:34:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:34:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:34:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:34:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:34:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:34:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:34:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:34:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:34:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:34:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:34:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:34:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:34:24 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:34:24 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:34:24 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:34:24 np0005603787 podman[254886]: 2026-01-31 10:34:24.84115112 +0000 UTC m=+0.051403817 container create 8e4e25ac5788c6fe94499a03f45786f9396031091f45e1e79806c4741bb0c024 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:34:24 np0005603787 systemd[1]: Started libpod-conmon-8e4e25ac5788c6fe94499a03f45786f9396031091f45e1e79806c4741bb0c024.scope.
Jan 31 05:34:24 np0005603787 nova_compute[238603]: 2026-01-31 10:34:24.892 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:34:24 np0005603787 podman[254886]: 2026-01-31 10:34:24.810024775 +0000 UTC m=+0.020277482 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:34:24 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:34:24 np0005603787 nova_compute[238603]: 2026-01-31 10:34:24.918 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:34:24 np0005603787 podman[254886]: 2026-01-31 10:34:24.945242004 +0000 UTC m=+0.155494681 container init 8e4e25ac5788c6fe94499a03f45786f9396031091f45e1e79806c4741bb0c024 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hugle, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:34:24 np0005603787 podman[254886]: 2026-01-31 10:34:24.951881895 +0000 UTC m=+0.162134552 container start 8e4e25ac5788c6fe94499a03f45786f9396031091f45e1e79806c4741bb0c024 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hugle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 05:34:24 np0005603787 laughing_hugle[254903]: 167 167
Jan 31 05:34:24 np0005603787 systemd[1]: libpod-8e4e25ac5788c6fe94499a03f45786f9396031091f45e1e79806c4741bb0c024.scope: Deactivated successfully.
Jan 31 05:34:24 np0005603787 conmon[254903]: conmon 8e4e25ac5788c6fe9449 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e4e25ac5788c6fe94499a03f45786f9396031091f45e1e79806c4741bb0c024.scope/container/memory.events
Jan 31 05:34:24 np0005603787 podman[254886]: 2026-01-31 10:34:24.991541392 +0000 UTC m=+0.201794149 container attach 8e4e25ac5788c6fe94499a03f45786f9396031091f45e1e79806c4741bb0c024 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hugle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 05:34:24 np0005603787 podman[254886]: 2026-01-31 10:34:24.99225107 +0000 UTC m=+0.202503797 container died 8e4e25ac5788c6fe94499a03f45786f9396031091f45e1e79806c4741bb0c024 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hugle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:34:25 np0005603787 systemd[1]: var-lib-containers-storage-overlay-0464a003f120bb2f844fb3949a91babf3077fae0262e3be52c11b2ad5bc45665-merged.mount: Deactivated successfully.
Jan 31 05:34:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:25 np0005603787 podman[254886]: 2026-01-31 10:34:25.113523962 +0000 UTC m=+0.323776639 container remove 8e4e25ac5788c6fe94499a03f45786f9396031091f45e1e79806c4741bb0c024 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hugle, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:34:25 np0005603787 systemd[1]: libpod-conmon-8e4e25ac5788c6fe94499a03f45786f9396031091f45e1e79806c4741bb0c024.scope: Deactivated successfully.
Jan 31 05:34:25 np0005603787 podman[254926]: 2026-01-31 10:34:25.310836576 +0000 UTC m=+0.103352445 container create c0393664477f0a365b4d78c513e516b90e666114aa1b14081c533fd49df9cd13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:34:25 np0005603787 podman[254926]: 2026-01-31 10:34:25.242170223 +0000 UTC m=+0.034686162 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:34:25 np0005603787 systemd[1]: Started libpod-conmon-c0393664477f0a365b4d78c513e516b90e666114aa1b14081c533fd49df9cd13.scope.
Jan 31 05:34:25 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:34:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/127339ff2cd9e38bb556480de99f72bf0106c60b0e9427cf6db6959a91aa51a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:34:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/127339ff2cd9e38bb556480de99f72bf0106c60b0e9427cf6db6959a91aa51a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:34:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/127339ff2cd9e38bb556480de99f72bf0106c60b0e9427cf6db6959a91aa51a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:34:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/127339ff2cd9e38bb556480de99f72bf0106c60b0e9427cf6db6959a91aa51a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:34:25 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/127339ff2cd9e38bb556480de99f72bf0106c60b0e9427cf6db6959a91aa51a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:34:25 np0005603787 podman[254926]: 2026-01-31 10:34:25.423216956 +0000 UTC m=+0.215733115 container init c0393664477f0a365b4d78c513e516b90e666114aa1b14081c533fd49df9cd13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 05:34:25 np0005603787 podman[254926]: 2026-01-31 10:34:25.433864674 +0000 UTC m=+0.226380533 container start c0393664477f0a365b4d78c513e516b90e666114aa1b14081c533fd49df9cd13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:34:25 np0005603787 podman[254926]: 2026-01-31 10:34:25.47313136 +0000 UTC m=+0.265647309 container attach c0393664477f0a365b4d78c513e516b90e666114aa1b14081c533fd49df9cd13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brattain, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 05:34:25 np0005603787 tender_brattain[254942]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:34:25 np0005603787 tender_brattain[254942]: --> All data devices are unavailable
Jan 31 05:34:25 np0005603787 systemd[1]: libpod-c0393664477f0a365b4d78c513e516b90e666114aa1b14081c533fd49df9cd13.scope: Deactivated successfully.
Jan 31 05:34:25 np0005603787 podman[254926]: 2026-01-31 10:34:25.888042571 +0000 UTC m=+0.680558460 container died c0393664477f0a365b4d78c513e516b90e666114aa1b14081c533fd49df9cd13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brattain, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 05:34:25 np0005603787 systemd[1]: var-lib-containers-storage-overlay-127339ff2cd9e38bb556480de99f72bf0106c60b0e9427cf6db6959a91aa51a8-merged.mount: Deactivated successfully.
Jan 31 05:34:26 np0005603787 podman[254926]: 2026-01-31 10:34:26.053817079 +0000 UTC m=+0.846332978 container remove c0393664477f0a365b4d78c513e516b90e666114aa1b14081c533fd49df9cd13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brattain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 05:34:26 np0005603787 systemd[1]: libpod-conmon-c0393664477f0a365b4d78c513e516b90e666114aa1b14081c533fd49df9cd13.scope: Deactivated successfully.
Jan 31 05:34:26 np0005603787 podman[255038]: 2026-01-31 10:34:26.527288719 +0000 UTC m=+0.051418276 container create c3d3ea3ed7efb3122f7f860af9ee84c59e9bff922355f4983d11785781214684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:34:26 np0005603787 systemd[1]: Started libpod-conmon-c3d3ea3ed7efb3122f7f860af9ee84c59e9bff922355f4983d11785781214684.scope.
Jan 31 05:34:26 np0005603787 podman[255038]: 2026-01-31 10:34:26.498025605 +0000 UTC m=+0.022155192 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:34:26 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:34:26 np0005603787 podman[255038]: 2026-01-31 10:34:26.634664773 +0000 UTC m=+0.158794330 container init c3d3ea3ed7efb3122f7f860af9ee84c59e9bff922355f4983d11785781214684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 05:34:26 np0005603787 podman[255038]: 2026-01-31 10:34:26.63933784 +0000 UTC m=+0.163467427 container start c3d3ea3ed7efb3122f7f860af9ee84c59e9bff922355f4983d11785781214684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:34:26 np0005603787 crazy_yonath[255054]: 167 167
Jan 31 05:34:26 np0005603787 systemd[1]: libpod-c3d3ea3ed7efb3122f7f860af9ee84c59e9bff922355f4983d11785781214684.scope: Deactivated successfully.
Jan 31 05:34:26 np0005603787 podman[255038]: 2026-01-31 10:34:26.685997187 +0000 UTC m=+0.210126784 container attach c3d3ea3ed7efb3122f7f860af9ee84c59e9bff922355f4983d11785781214684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 05:34:26 np0005603787 podman[255038]: 2026-01-31 10:34:26.6868661 +0000 UTC m=+0.210995687 container died c3d3ea3ed7efb3122f7f860af9ee84c59e9bff922355f4983d11785781214684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:34:26 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f4ed004cdb5285bd6176a0adaf515a15a31f9f867de7c7613fd81e9f50484411-merged.mount: Deactivated successfully.
Jan 31 05:34:26 np0005603787 podman[255038]: 2026-01-31 10:34:26.910992732 +0000 UTC m=+0.435122319 container remove c3d3ea3ed7efb3122f7f860af9ee84c59e9bff922355f4983d11785781214684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:34:26 np0005603787 systemd[1]: libpod-conmon-c3d3ea3ed7efb3122f7f860af9ee84c59e9bff922355f4983d11785781214684.scope: Deactivated successfully.
Jan 31 05:34:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:27 np0005603787 podman[255080]: 2026-01-31 10:34:27.117647991 +0000 UTC m=+0.069502378 container create 425a174dc7800db8e08c9ade6c4e910e49036144c60ca0ff338facb2cf711157 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:34:27 np0005603787 nova_compute[238603]: 2026-01-31 10:34:27.141 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:34:27 np0005603787 podman[255080]: 2026-01-31 10:34:27.084020768 +0000 UTC m=+0.035875205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:34:27 np0005603787 systemd[1]: Started libpod-conmon-425a174dc7800db8e08c9ade6c4e910e49036144c60ca0ff338facb2cf711157.scope.
Jan 31 05:34:27 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:34:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b44721bdd1a032cee576330b7e3abe555e4caf5628715501bda6d8903c37247/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:34:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b44721bdd1a032cee576330b7e3abe555e4caf5628715501bda6d8903c37247/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:34:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b44721bdd1a032cee576330b7e3abe555e4caf5628715501bda6d8903c37247/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:34:27 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b44721bdd1a032cee576330b7e3abe555e4caf5628715501bda6d8903c37247/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:34:27 np0005603787 podman[255080]: 2026-01-31 10:34:27.293315018 +0000 UTC m=+0.245169495 container init 425a174dc7800db8e08c9ade6c4e910e49036144c60ca0ff338facb2cf711157 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 05:34:27 np0005603787 podman[255080]: 2026-01-31 10:34:27.303466654 +0000 UTC m=+0.255321061 container start 425a174dc7800db8e08c9ade6c4e910e49036144c60ca0ff338facb2cf711157 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 05:34:27 np0005603787 podman[255080]: 2026-01-31 10:34:27.334845666 +0000 UTC m=+0.286700093 container attach 425a174dc7800db8e08c9ade6c4e910e49036144c60ca0ff338facb2cf711157 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_varahamihira, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]: {
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:    "0": [
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:        {
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "devices": [
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "/dev/loop3"
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            ],
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "lv_name": "ceph_lv0",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "lv_size": "21470642176",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "name": "ceph_lv0",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "tags": {
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.cluster_name": "ceph",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.crush_device_class": "",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.encrypted": "0",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.objectstore": "bluestore",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.osd_id": "0",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.type": "block",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.vdo": "0",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.with_tpm": "0"
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            },
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "type": "block",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "vg_name": "ceph_vg0"
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:        }
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:    ],
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:    "1": [
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:        {
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "devices": [
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "/dev/loop4"
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            ],
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "lv_name": "ceph_lv1",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "lv_size": "21470642176",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "name": "ceph_lv1",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "tags": {
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.cluster_name": "ceph",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.crush_device_class": "",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.encrypted": "0",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.objectstore": "bluestore",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.osd_id": "1",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.type": "block",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.vdo": "0",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.with_tpm": "0"
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            },
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "type": "block",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "vg_name": "ceph_vg1"
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:        }
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:    ],
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:    "2": [
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:        {
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "devices": [
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "/dev/loop5"
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            ],
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "lv_name": "ceph_lv2",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "lv_size": "21470642176",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "name": "ceph_lv2",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "tags": {
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.cluster_name": "ceph",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.crush_device_class": "",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.encrypted": "0",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.objectstore": "bluestore",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.osd_id": "2",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.type": "block",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.vdo": "0",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:                "ceph.with_tpm": "0"
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            },
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "type": "block",
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:            "vg_name": "ceph_vg2"
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:        }
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]:    ]
Jan 31 05:34:27 np0005603787 cool_varahamihira[255098]: }
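The JSON block printed by the cool_varahamihira container above has the shape of "ceph-volume lvm list --format json" output: one key per OSD id ("0", "1", "2"), each entry carrying its logical volume, backing loop device, and ceph.* LV tags. As an illustration only (the filename lvm_list.json and the summary format are assumptions, not part of the log), the same structure can be reduced to a per-OSD summary with a few lines of Python:

    # Minimal sketch: parse the ceph-volume style JSON shown above.
    # Assumes the container output was captured to "lvm_list.json";
    # field names (devices, lv_path, tags) are taken from the log lines.
    import json

    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, entries in sorted(osds.items()):
        for entry in entries:
            tags = entry.get("tags", {})
            print(
                f"osd.{osd_id}: lv={entry['lv_path']} "
                f"devices={','.join(entry['devices'])} "
                f"osd_fsid={tags.get('ceph.osd_fsid', '?')}"
            )

Run against the output above, this would report osd.0 on /dev/ceph_vg0/ceph_lv0 (/dev/loop3), osd.1 on /dev/ceph_vg1/ceph_lv1 (/dev/loop4), and osd.2 on /dev/ceph_vg2/ceph_lv2 (/dev/loop5).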
Jan 31 05:34:27 np0005603787 systemd[1]: libpod-425a174dc7800db8e08c9ade6c4e910e49036144c60ca0ff338facb2cf711157.scope: Deactivated successfully.
Jan 31 05:34:27 np0005603787 podman[255080]: 2026-01-31 10:34:27.610385574 +0000 UTC m=+0.562239961 container died 425a174dc7800db8e08c9ade6c4e910e49036144c60ca0ff338facb2cf711157 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_varahamihira, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 05:34:27 np0005603787 systemd[1]: var-lib-containers-storage-overlay-0b44721bdd1a032cee576330b7e3abe555e4caf5628715501bda6d8903c37247-merged.mount: Deactivated successfully.
Jan 31 05:34:27 np0005603787 podman[255080]: 2026-01-31 10:34:27.809149568 +0000 UTC m=+0.761003975 container remove 425a174dc7800db8e08c9ade6c4e910e49036144c60ca0ff338facb2cf711157 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_varahamihira, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 05:34:27 np0005603787 systemd[1]: libpod-conmon-425a174dc7800db8e08c9ade6c4e910e49036144c60ca0ff338facb2cf711157.scope: Deactivated successfully.
Jan 31 05:34:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:34:28 np0005603787 podman[255186]: 2026-01-31 10:34:28.301227142 +0000 UTC m=+0.029685246 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:34:28 np0005603787 podman[255186]: 2026-01-31 10:34:28.406203461 +0000 UTC m=+0.134661535 container create 740cbb734ececf822b78d1457d0131c6e7f68a6d73c2743c56e2c4a728e0b47f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pascal, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 05:34:28 np0005603787 systemd[1]: Started libpod-conmon-740cbb734ececf822b78d1457d0131c6e7f68a6d73c2743c56e2c4a728e0b47f.scope.
Jan 31 05:34:28 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:34:28 np0005603787 podman[255186]: 2026-01-31 10:34:28.532549831 +0000 UTC m=+0.261008015 container init 740cbb734ececf822b78d1457d0131c6e7f68a6d73c2743c56e2c4a728e0b47f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:34:28 np0005603787 podman[255186]: 2026-01-31 10:34:28.538474551 +0000 UTC m=+0.266932665 container start 740cbb734ececf822b78d1457d0131c6e7f68a6d73c2743c56e2c4a728e0b47f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pascal, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:34:28 np0005603787 trusting_pascal[255203]: 167 167
Jan 31 05:34:28 np0005603787 systemd[1]: libpod-740cbb734ececf822b78d1457d0131c6e7f68a6d73c2743c56e2c4a728e0b47f.scope: Deactivated successfully.
Jan 31 05:34:28 np0005603787 conmon[255203]: conmon 740cbb734ececf822b78 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-740cbb734ececf822b78d1457d0131c6e7f68a6d73c2743c56e2c4a728e0b47f.scope/container/memory.events
Jan 31 05:34:28 np0005603787 podman[255186]: 2026-01-31 10:34:28.607786882 +0000 UTC m=+0.336245076 container attach 740cbb734ececf822b78d1457d0131c6e7f68a6d73c2743c56e2c4a728e0b47f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 05:34:28 np0005603787 podman[255186]: 2026-01-31 10:34:28.60845066 +0000 UTC m=+0.336908844 container died 740cbb734ececf822b78d1457d0131c6e7f68a6d73c2743c56e2c4a728e0b47f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:34:28 np0005603787 systemd[1]: var-lib-containers-storage-overlay-cd3426978581b1371a40599c3d663cb383b963279e28fd3fe3c2aecb80543c9b-merged.mount: Deactivated successfully.
Jan 31 05:34:28 np0005603787 podman[255186]: 2026-01-31 10:34:28.674897453 +0000 UTC m=+0.403355527 container remove 740cbb734ececf822b78d1457d0131c6e7f68a6d73c2743c56e2c4a728e0b47f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pascal, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 05:34:28 np0005603787 systemd[1]: libpod-conmon-740cbb734ececf822b78d1457d0131c6e7f68a6d73c2743c56e2c4a728e0b47f.scope: Deactivated successfully.
Jan 31 05:34:28 np0005603787 podman[255229]: 2026-01-31 10:34:28.830634259 +0000 UTC m=+0.050348826 container create 3e2826aaff801a47f2edde20ed02dea9b32b1c33783be9383558f9a840f507b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 05:34:28 np0005603787 systemd[1]: Started libpod-conmon-3e2826aaff801a47f2edde20ed02dea9b32b1c33783be9383558f9a840f507b4.scope.
Jan 31 05:34:28 np0005603787 podman[255229]: 2026-01-31 10:34:28.804605533 +0000 UTC m=+0.024320160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:34:28 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:34:28 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb3cdbeedaacb0f05d857ebeda98fee49e3446a0513cfb1c446ce3353f1132bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:34:28 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb3cdbeedaacb0f05d857ebeda98fee49e3446a0513cfb1c446ce3353f1132bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:34:28 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb3cdbeedaacb0f05d857ebeda98fee49e3446a0513cfb1c446ce3353f1132bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:34:28 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb3cdbeedaacb0f05d857ebeda98fee49e3446a0513cfb1c446ce3353f1132bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:34:28 np0005603787 podman[255229]: 2026-01-31 10:34:28.929321238 +0000 UTC m=+0.149035875 container init 3e2826aaff801a47f2edde20ed02dea9b32b1c33783be9383558f9a840f507b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 05:34:28 np0005603787 podman[255229]: 2026-01-31 10:34:28.940416049 +0000 UTC m=+0.160130586 container start 3e2826aaff801a47f2edde20ed02dea9b32b1c33783be9383558f9a840f507b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 05:34:28 np0005603787 podman[255229]: 2026-01-31 10:34:28.944578082 +0000 UTC m=+0.164292659 container attach 3e2826aaff801a47f2edde20ed02dea9b32b1c33783be9383558f9a840f507b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chatelet, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:34:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:29 np0005603787 lvm[255322]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:34:29 np0005603787 lvm[255324]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:34:29 np0005603787 lvm[255324]: VG ceph_vg1 finished
Jan 31 05:34:29 np0005603787 lvm[255322]: VG ceph_vg0 finished
Jan 31 05:34:29 np0005603787 lvm[255326]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:34:29 np0005603787 lvm[255326]: VG ceph_vg2 finished
Jan 31 05:34:29 np0005603787 intelligent_chatelet[255245]: {}
Jan 31 05:34:29 np0005603787 systemd[1]: libpod-3e2826aaff801a47f2edde20ed02dea9b32b1c33783be9383558f9a840f507b4.scope: Deactivated successfully.
Jan 31 05:34:29 np0005603787 podman[255229]: 2026-01-31 10:34:29.697007882 +0000 UTC m=+0.916722459 container died 3e2826aaff801a47f2edde20ed02dea9b32b1c33783be9383558f9a840f507b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:34:29 np0005603787 systemd[1]: libpod-3e2826aaff801a47f2edde20ed02dea9b32b1c33783be9383558f9a840f507b4.scope: Consumed 1.137s CPU time.
Jan 31 05:34:29 np0005603787 systemd[1]: var-lib-containers-storage-overlay-eb3cdbeedaacb0f05d857ebeda98fee49e3446a0513cfb1c446ce3353f1132bc-merged.mount: Deactivated successfully.
Jan 31 05:34:29 np0005603787 podman[255229]: 2026-01-31 10:34:29.753181027 +0000 UTC m=+0.972895574 container remove 3e2826aaff801a47f2edde20ed02dea9b32b1c33783be9383558f9a840f507b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chatelet, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 05:34:29 np0005603787 systemd[1]: libpod-conmon-3e2826aaff801a47f2edde20ed02dea9b32b1c33783be9383558f9a840f507b4.scope: Deactivated successfully.
Jan 31 05:34:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:34:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:34:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:34:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:34:30 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:34:30 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:34:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:31 np0005603787 nova_compute[238603]: 2026-01-31 10:34:31.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:34:31 np0005603787 nova_compute[238603]: 2026-01-31 10:34:31.104 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 05:34:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:34:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:34:37.080 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:34:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:34:37.080 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:34:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:34:37.080 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:34:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:34:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:40 np0005603787 systemd-logind[786]: New session 53 of user zuul.
Jan 31 05:34:40 np0005603787 systemd[1]: Started Session 53 of User zuul.
Jan 31 05:34:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:41 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:34:41.161 154765 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:08:49', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:80:fe:bf:9d:90'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 05:34:41 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:34:41.162 154765 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 05:34:41 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:34:41.163 154765 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ef41023c-ae05-4c9a-b1cb-d6bd86d05fb4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 05:34:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:34:43
Jan 31 05:34:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:34:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:34:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'images', 'default.rgw.control', '.mgr', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'volumes', 'cephfs.cephfs.meta']
Jan 31 05:34:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:34:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:34:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:34:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:34:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:34:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:34:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:34:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:34:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:34:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:34:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:34:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:34:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:34:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:34:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:34:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:34:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:34:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:34:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:45 np0005603787 systemd-logind[786]: New session 54 of user zuul.
Jan 31 05:34:45 np0005603787 systemd[1]: Started Session 54 of User zuul.
Jan 31 05:34:46 np0005603787 systemd[1]: Reloading.
Jan 31 05:34:46 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:34:46 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:34:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:47 np0005603787 systemd[1]: Reloading.
Jan 31 05:34:47 np0005603787 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:34:47 np0005603787 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:34:47 np0005603787 systemd[1]: Starting Podman API Socket...
Jan 31 05:34:47 np0005603787 systemd[1]: Listening on Podman API Socket.
Jan 31 05:34:47 np0005603787 dbus-broker-launch[774]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Jan 31 05:34:47 np0005603787 systemd[1]: podman.socket: Deactivated successfully.
Jan 31 05:34:47 np0005603787 systemd[1]: Closed Podman API Socket.
Jan 31 05:34:47 np0005603787 systemd[1]: Stopping Podman API Socket...
Jan 31 05:34:47 np0005603787 systemd[1]: Starting Podman API Socket...
Jan 31 05:34:47 np0005603787 systemd[1]: Listening on Podman API Socket.
Jan 31 05:34:47 np0005603787 systemd-logind[786]: New session 55 of user zuul.
Jan 31 05:34:47 np0005603787 systemd[1]: Started Session 55 of User zuul.
Jan 31 05:34:47 np0005603787 systemd[1]: Starting Podman API Service...
Jan 31 05:34:47 np0005603787 systemd[1]: Started Podman API Service.
Jan 31 05:34:47 np0005603787 podman[255773]: time="2026-01-31T10:34:47Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 31 05:34:47 np0005603787 podman[255773]: time="2026-01-31T10:34:47Z" level=info msg="Setting parallel job count to 25"
Jan 31 05:34:47 np0005603787 podman[255773]: time="2026-01-31T10:34:47Z" level=info msg="Using sqlite as database backend"
Jan 31 05:34:47 np0005603787 podman[255773]: time="2026-01-31T10:34:47Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Jan 31 05:34:47 np0005603787 podman[255773]: time="2026-01-31T10:34:47Z" level=info msg="Using systemd socket activation to determine API endpoint"
Jan 31 05:34:47 np0005603787 podman[255773]: time="2026-01-31T10:34:47Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Jan 31 05:34:47 np0005603787 podman[255773]: @ - - [31/Jan/2026:10:34:47 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Jan 31 05:34:47 np0005603787 podman[255773]: @ - - [31/Jan/2026:10:34:47 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 22534 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
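The podman.sock lines just above record the API service answering a HEAD /v4.7.0/libpod/_ping and a GET /v4.7.0/libpod/containers/json from a PodmanPy client. As an aside, the same containers/json request can be issued over the UNIX socket with nothing but the Python standard library; the snippet below is a sketch under the assumption that the caller has read access to /run/podman/podman.sock:

    # Minimal sketch (not part of the log): query the libpod endpoint
    # seen above over the UNIX socket using only the stdlib.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a UNIX-domain socket."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.7.0/libpod/containers/json")
    containers = json.loads(conn.getresponse().read())
    for c in containers:
        print(c["Names"], c["State"])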
Jan 31 05:34:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:34:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:49 np0005603787 podman[255788]: 2026-01-31 10:34:49.842983147 +0000 UTC m=+0.063493355 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 05:34:49 np0005603787 podman[255787]: 2026-01-31 10:34:49.875645663 +0000 UTC m=+0.099629475 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 05:34:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 2.450943614167069e-07 of space, bias 1.0, pg target 7.352830842501207e-05 quantized to 32 (current 32)
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.527403468629877e-06 of space, bias 4.0, pg target 0.0018328841623558524 quantized to 16 (current 16)
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:34:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
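Each pg_autoscaler line above computes "pg target" as the pool's fraction of raw space times its bias times a constant that works out to 300 for this cluster; reading that constant as 3 OSDs times the default mon_target_pg_per_osd of 100 is an assumption, but the arithmetic itself matches the logged values exactly:

    # Minimal sketch: reproduce the pg_autoscaler "pg target" numbers above.
    # FACTOR = 300 is inferred from the logged values; interpreting it as
    # 3 OSDs * mon_target_pg_per_osd (100) is an assumption, not confirmed here.
    FACTOR = 300

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (1.527403468629877e-06, 4.0),
        ".rgw.root":          (3.8154424692322717e-07, 1.0),
    }

    for name, (usage, bias) in pools.items():
        pg_target = usage * bias * FACTOR
        print(f"{name}: pg target {pg_target!r}")
    # .mgr               -> 0.0021557249951162337  (matches the log line)
    # cephfs.cephfs.meta -> 0.0018328841623558524  (matches the log line)
    # .rgw.root          -> 0.00011446327407696816 (matches the log line)

The quantized values in the log (1, 16, 32) then come from rounding these targets up to the pool's minimum PG counts, which is why every pool stays at its current size here.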
Jan 31 05:34:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:34:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:34:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:02 np0005603787 podman[255773]: time="2026-01-31T10:35:02Z" level=info msg="Received shutdown.Stop(), terminating!" PID=255773
Jan 31 05:35:02 np0005603787 systemd[1]: podman.service: Deactivated successfully.
Jan 31 05:35:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:35:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:35:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:10 np0005603787 systemd[1]: session-53.scope: Deactivated successfully.
Jan 31 05:35:10 np0005603787 systemd-logind[786]: Session 53 logged out. Waiting for processes to exit.
Jan 31 05:35:10 np0005603787 systemd-logind[786]: Removed session 53.
Jan 31 05:35:10 np0005603787 systemd-logind[786]: Session 54 logged out. Waiting for processes to exit.
Jan 31 05:35:10 np0005603787 systemd[1]: session-54.scope: Deactivated successfully.
Jan 31 05:35:10 np0005603787 systemd[1]: session-54.scope: Consumed 1.000s CPU time.
Jan 31 05:35:10 np0005603787 systemd-logind[786]: Removed session 54.
Jan 31 05:35:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:11 np0005603787 systemd[1]: session-55.scope: Deactivated successfully.
Jan 31 05:35:11 np0005603787 systemd-logind[786]: Session 55 logged out. Waiting for processes to exit.
Jan 31 05:35:11 np0005603787 systemd-logind[786]: Removed session 55.
Jan 31 05:35:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:35:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:35:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:35:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:35:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:35:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:35:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:35:14 np0005603787 nova_compute[238603]: 2026-01-31 10:35:14.134 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:35:14 np0005603787 nova_compute[238603]: 2026-01-31 10:35:14.135 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:35:14 np0005603787 nova_compute[238603]: 2026-01-31 10:35:14.135 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:35:15 np0005603787 nova_compute[238603]: 2026-01-31 10:35:15.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:35:15 np0005603787 nova_compute[238603]: 2026-01-31 10:35:15.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:35:15 np0005603787 nova_compute[238603]: 2026-01-31 10:35:15.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:35:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:15 np0005603787 nova_compute[238603]: 2026-01-31 10:35:15.325 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:35:15 np0005603787 nova_compute[238603]: 2026-01-31 10:35:15.325 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:35:15 np0005603787 nova_compute[238603]: 2026-01-31 10:35:15.326 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:35:17 np0005603787 nova_compute[238603]: 2026-01-31 10:35:17.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:35:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:18 np0005603787 nova_compute[238603]: 2026-01-31 10:35:18.098 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:35:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:35:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:20 np0005603787 podman[255883]: 2026-01-31 10:35:20.835619285 +0000 UTC m=+0.047857800 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 31 05:35:20 np0005603787 podman[255882]: 2026-01-31 10:35:20.886202858 +0000 UTC m=+0.096588093 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 05:35:21 np0005603787 nova_compute[238603]: 2026-01-31 10:35:21.101 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:35:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:21 np0005603787 nova_compute[238603]: 2026-01-31 10:35:21.153 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:35:21 np0005603787 nova_compute[238603]: 2026-01-31 10:35:21.154 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:35:21 np0005603787 nova_compute[238603]: 2026-01-31 10:35:21.154 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:35:21 np0005603787 nova_compute[238603]: 2026-01-31 10:35:21.154 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:35:21 np0005603787 nova_compute[238603]: 2026-01-31 10:35:21.155 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:35:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:35:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/561342190' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:35:21 np0005603787 nova_compute[238603]: 2026-01-31 10:35:21.634 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:35:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:35:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/681969393' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:35:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:35:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/681969393' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:35:21 np0005603787 nova_compute[238603]: 2026-01-31 10:35:21.776 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:35:21 np0005603787 nova_compute[238603]: 2026-01-31 10:35:21.778 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5120MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:35:21 np0005603787 nova_compute[238603]: 2026-01-31 10:35:21.778 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:35:21 np0005603787 nova_compute[238603]: 2026-01-31 10:35:21.778 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:35:21 np0005603787 nova_compute[238603]: 2026-01-31 10:35:21.839 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:35:21 np0005603787 nova_compute[238603]: 2026-01-31 10:35:21.839 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:35:21 np0005603787 nova_compute[238603]: 2026-01-31 10:35:21.857 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:35:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:35:22 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/831319523' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:35:22 np0005603787 nova_compute[238603]: 2026-01-31 10:35:22.393 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:35:22 np0005603787 nova_compute[238603]: 2026-01-31 10:35:22.398 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:35:22 np0005603787 nova_compute[238603]: 2026-01-31 10:35:22.418 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:35:22 np0005603787 nova_compute[238603]: 2026-01-31 10:35:22.420 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:35:22 np0005603787 nova_compute[238603]: 2026-01-31 10:35:22.421 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
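The update_available_resource audit above sizes the RBD-backed disk inventory by shelling out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf", then reports it to placement with the allocation ratios shown (schedulable VCPU = 8 total * 4.0 ratio = 32; DISK_GB = 59 * 0.9; MEMORY_MB = (7679 - 512 reserved) * 1.0). A standalone sketch of the same capacity query; the JSON field names (stats.total_avail_bytes, per-pool stats.max_avail) follow the usual ceph df layout but are assumptions here, not nova's exact parsing:

    # Sketch: run the same capacity query the resource tracker logs above.
    import json, subprocess

    def ceph_df(ceph_id="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", ceph_id, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    df = ceph_df()
    print("avail GiB:", df["stats"]["total_avail_bytes"] / 1024 ** 3)  # assumed key
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["max_avail"])                # assumed keys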
Jan 31 05:35:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:35:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:27 np0005603787 nova_compute[238603]: 2026-01-31 10:35:27.422 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:35:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:35:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:35:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:35:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:35:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:35:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:35:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:35:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:35:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:35:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:35:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:35:30 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:35:30 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:35:30 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:35:30 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:35:30 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
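The mon_command/audit pairs above are the cephadm mgr module driving the monitor over the same JSON command interface the CLI uses (e.g. {"prefix": "config generate-minimal-conf"}). A minimal sketch of issuing one of those commands directly through the python-rados bindings; the client name and conffile path are illustrative assumptions:

    # Sketch: send the same JSON-framed monitor command seen in the audit log.
    import json, rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b"")
    print(ret, outbuf.decode())
    cluster.shutdown()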
Jan 31 05:35:30 np0005603787 podman[256115]: 2026-01-31 10:35:30.985042742 +0000 UTC m=+0.062431486 container create e2ea1636bc2ab316af299264bb78294f43640c80daa958e03f9d8d0cf2e6e4d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 05:35:31 np0005603787 systemd[1]: Started libpod-conmon-e2ea1636bc2ab316af299264bb78294f43640c80daa958e03f9d8d0cf2e6e4d9.scope.
Jan 31 05:35:31 np0005603787 podman[256115]: 2026-01-31 10:35:30.957398652 +0000 UTC m=+0.034787446 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:35:31 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:35:31 np0005603787 podman[256115]: 2026-01-31 10:35:31.07120079 +0000 UTC m=+0.148589574 container init e2ea1636bc2ab316af299264bb78294f43640c80daa958e03f9d8d0cf2e6e4d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_vaughan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:35:31 np0005603787 podman[256115]: 2026-01-31 10:35:31.079035033 +0000 UTC m=+0.156423777 container start e2ea1636bc2ab316af299264bb78294f43640c80daa958e03f9d8d0cf2e6e4d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 05:35:31 np0005603787 podman[256115]: 2026-01-31 10:35:31.083203996 +0000 UTC m=+0.160592790 container attach e2ea1636bc2ab316af299264bb78294f43640c80daa958e03f9d8d0cf2e6e4d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_vaughan, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 05:35:31 np0005603787 reverent_vaughan[256131]: 167 167
Jan 31 05:35:31 np0005603787 systemd[1]: libpod-e2ea1636bc2ab316af299264bb78294f43640c80daa958e03f9d8d0cf2e6e4d9.scope: Deactivated successfully.
Jan 31 05:35:31 np0005603787 podman[256115]: 2026-01-31 10:35:31.085636122 +0000 UTC m=+0.163024866 container died e2ea1636bc2ab316af299264bb78294f43640c80daa958e03f9d8d0cf2e6e4d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 05:35:31 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ff9f6d0fe54c0721b307f942f14660856cf2d43fc54b9f769b8077b828a85c83-merged.mount: Deactivated successfully.
Jan 31 05:35:31 np0005603787 podman[256115]: 2026-01-31 10:35:31.132774841 +0000 UTC m=+0.210163565 container remove e2ea1636bc2ab316af299264bb78294f43640c80daa958e03f9d8d0cf2e6e4d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_vaughan, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:35:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:31 np0005603787 systemd[1]: libpod-conmon-e2ea1636bc2ab316af299264bb78294f43640c80daa958e03f9d8d0cf2e6e4d9.scope: Deactivated successfully.
Jan 31 05:35:31 np0005603787 podman[256154]: 2026-01-31 10:35:31.274120507 +0000 UTC m=+0.047885971 container create f7292a25b11e6a8ca1f406ee7931f22dbcc286f060deec8d57de5fffb54a4a26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_vaughan, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:35:31 np0005603787 systemd[1]: Started libpod-conmon-f7292a25b11e6a8ca1f406ee7931f22dbcc286f060deec8d57de5fffb54a4a26.scope.
Jan 31 05:35:31 np0005603787 podman[256154]: 2026-01-31 10:35:31.249029986 +0000 UTC m=+0.022795510 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:35:31 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:35:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14c0854a4eb870bb2b48976a27e01472bd1089286c1476962b31181d507c0f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:35:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14c0854a4eb870bb2b48976a27e01472bd1089286c1476962b31181d507c0f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:35:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14c0854a4eb870bb2b48976a27e01472bd1089286c1476962b31181d507c0f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:35:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14c0854a4eb870bb2b48976a27e01472bd1089286c1476962b31181d507c0f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:35:31 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14c0854a4eb870bb2b48976a27e01472bd1089286c1476962b31181d507c0f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:35:31 np0005603787 podman[256154]: 2026-01-31 10:35:31.375333274 +0000 UTC m=+0.149098728 container init f7292a25b11e6a8ca1f406ee7931f22dbcc286f060deec8d57de5fffb54a4a26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:35:31 np0005603787 podman[256154]: 2026-01-31 10:35:31.38843555 +0000 UTC m=+0.162200974 container start f7292a25b11e6a8ca1f406ee7931f22dbcc286f060deec8d57de5fffb54a4a26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_vaughan, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:35:31 np0005603787 podman[256154]: 2026-01-31 10:35:31.393458446 +0000 UTC m=+0.167223910 container attach f7292a25b11e6a8ca1f406ee7931f22dbcc286f060deec8d57de5fffb54a4a26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 05:35:31 np0005603787 nice_vaughan[256170]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:35:31 np0005603787 nice_vaughan[256170]: --> All data devices are unavailable
Jan 31 05:35:31 np0005603787 systemd[1]: libpod-f7292a25b11e6a8ca1f406ee7931f22dbcc286f060deec8d57de5fffb54a4a26.scope: Deactivated successfully.
Jan 31 05:35:31 np0005603787 podman[256154]: 2026-01-31 10:35:31.901253037 +0000 UTC m=+0.675018471 container died f7292a25b11e6a8ca1f406ee7931f22dbcc286f060deec8d57de5fffb54a4a26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:35:31 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f14c0854a4eb870bb2b48976a27e01472bd1089286c1476962b31181d507c0f9-merged.mount: Deactivated successfully.
Jan 31 05:35:31 np0005603787 podman[256154]: 2026-01-31 10:35:31.958232374 +0000 UTC m=+0.731997808 container remove f7292a25b11e6a8ca1f406ee7931f22dbcc286f060deec8d57de5fffb54a4a26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:35:31 np0005603787 systemd[1]: libpod-conmon-f7292a25b11e6a8ca1f406ee7931f22dbcc286f060deec8d57de5fffb54a4a26.scope: Deactivated successfully.
Jan 31 05:35:32 np0005603787 podman[256264]: 2026-01-31 10:35:32.377289206 +0000 UTC m=+0.037651323 container create fbaa0de3fd3259f496587a025387ec00feabddd330ec85c98730d698c660b4f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 05:35:32 np0005603787 systemd[1]: Started libpod-conmon-fbaa0de3fd3259f496587a025387ec00feabddd330ec85c98730d698c660b4f6.scope.
Jan 31 05:35:32 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:35:32 np0005603787 podman[256264]: 2026-01-31 10:35:32.450376889 +0000 UTC m=+0.110739026 container init fbaa0de3fd3259f496587a025387ec00feabddd330ec85c98730d698c660b4f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:35:32 np0005603787 podman[256264]: 2026-01-31 10:35:32.456523977 +0000 UTC m=+0.116886084 container start fbaa0de3fd3259f496587a025387ec00feabddd330ec85c98730d698c660b4f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:35:32 np0005603787 podman[256264]: 2026-01-31 10:35:32.361693123 +0000 UTC m=+0.022055230 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:35:32 np0005603787 great_engelbart[256281]: 167 167
Jan 31 05:35:32 np0005603787 systemd[1]: libpod-fbaa0de3fd3259f496587a025387ec00feabddd330ec85c98730d698c660b4f6.scope: Deactivated successfully.
Jan 31 05:35:32 np0005603787 podman[256264]: 2026-01-31 10:35:32.460665519 +0000 UTC m=+0.121027656 container attach fbaa0de3fd3259f496587a025387ec00feabddd330ec85c98730d698c660b4f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_engelbart, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 05:35:32 np0005603787 podman[256264]: 2026-01-31 10:35:32.461664527 +0000 UTC m=+0.122026614 container died fbaa0de3fd3259f496587a025387ec00feabddd330ec85c98730d698c660b4f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 05:35:32 np0005603787 systemd[1]: var-lib-containers-storage-overlay-0f082ee9eeabb03a6e0e982ef6f6b9ee9e91077b6f8658cfa92b214809ed8294-merged.mount: Deactivated successfully.
Jan 31 05:35:32 np0005603787 podman[256264]: 2026-01-31 10:35:32.500209092 +0000 UTC m=+0.160571189 container remove fbaa0de3fd3259f496587a025387ec00feabddd330ec85c98730d698c660b4f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_engelbart, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 05:35:32 np0005603787 systemd[1]: libpod-conmon-fbaa0de3fd3259f496587a025387ec00feabddd330ec85c98730d698c660b4f6.scope: Deactivated successfully.
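The short-lived Ceph containers above and below are cephadm's periodic device scan on this host: nice_vaughan ran a ceph-volume batch report that found "0 physical, 3 LVM" data devices and declared them all unavailable, presumably because they already carry OSDs, and agitated_gates (started just below) dumps ceph-volume lvm list style JSON mapping OSD ids 0-2 to the ceph_vg*/ceph_lv* logical volumes. A small sketch that reads such JSON into an OSD-id to LV-path map, using only keys visible in the dump below; the exact ceph-volume invocation is an assumption, since the log does not show the command line:

    # Sketch: map OSD id -> LV path / fsid from ceph-volume's JSON inventory.
    import json, subprocess

    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],  # assumed invocation
        check=True, capture_output=True, text=True,
    ).stdout
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])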
Jan 31 05:35:32 np0005603787 podman[256306]: 2026-01-31 10:35:32.626007287 +0000 UTC m=+0.041494068 container create 73b1964ae82c1168161a56db3b9eb051c6aec1fd089a4243eb1182469168b77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gates, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 05:35:32 np0005603787 systemd[1]: Started libpod-conmon-73b1964ae82c1168161a56db3b9eb051c6aec1fd089a4243eb1182469168b77d.scope.
Jan 31 05:35:32 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:35:32 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f87d32caf5a805fa3ae5c2ec34167982e743f5c582e22ef86274be45243e76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:35:32 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f87d32caf5a805fa3ae5c2ec34167982e743f5c582e22ef86274be45243e76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:35:32 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f87d32caf5a805fa3ae5c2ec34167982e743f5c582e22ef86274be45243e76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:35:32 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f87d32caf5a805fa3ae5c2ec34167982e743f5c582e22ef86274be45243e76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:35:32 np0005603787 podman[256306]: 2026-01-31 10:35:32.606343943 +0000 UTC m=+0.021830734 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:35:32 np0005603787 podman[256306]: 2026-01-31 10:35:32.700358524 +0000 UTC m=+0.115845285 container init 73b1964ae82c1168161a56db3b9eb051c6aec1fd089a4243eb1182469168b77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gates, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 05:35:32 np0005603787 podman[256306]: 2026-01-31 10:35:32.714001475 +0000 UTC m=+0.129488236 container start 73b1964ae82c1168161a56db3b9eb051c6aec1fd089a4243eb1182469168b77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gates, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 31 05:35:32 np0005603787 podman[256306]: 2026-01-31 10:35:32.718551768 +0000 UTC m=+0.134038559 container attach 73b1964ae82c1168161a56db3b9eb051c6aec1fd089a4243eb1182469168b77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gates, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 05:35:32 np0005603787 agitated_gates[256322]: {
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:    "0": [
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:        {
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "devices": [
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "/dev/loop3"
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            ],
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "lv_name": "ceph_lv0",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "lv_size": "21470642176",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "name": "ceph_lv0",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "tags": {
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.cluster_name": "ceph",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.crush_device_class": "",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.encrypted": "0",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.objectstore": "bluestore",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.osd_id": "0",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.type": "block",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.vdo": "0",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.with_tpm": "0"
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            },
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "type": "block",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "vg_name": "ceph_vg0"
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:        }
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:    ],
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:    "1": [
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:        {
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "devices": [
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "/dev/loop4"
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            ],
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "lv_name": "ceph_lv1",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "lv_size": "21470642176",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "name": "ceph_lv1",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "tags": {
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.cluster_name": "ceph",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.crush_device_class": "",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.encrypted": "0",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.objectstore": "bluestore",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.osd_id": "1",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.type": "block",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.vdo": "0",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.with_tpm": "0"
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            },
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "type": "block",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "vg_name": "ceph_vg1"
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:        }
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:    ],
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:    "2": [
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:        {
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "devices": [
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "/dev/loop5"
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            ],
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "lv_name": "ceph_lv2",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "lv_size": "21470642176",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "name": "ceph_lv2",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "tags": {
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.cluster_name": "ceph",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.crush_device_class": "",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.encrypted": "0",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.objectstore": "bluestore",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.osd_id": "2",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.type": "block",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.vdo": "0",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:                "ceph.with_tpm": "0"
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            },
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "type": "block",
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:            "vg_name": "ceph_vg2"
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:        }
Jan 31 05:35:32 np0005603787 agitated_gates[256322]:    ]
Jan 31 05:35:32 np0005603787 agitated_gates[256322]: }
Jan 31 05:35:32 np0005603787 systemd[1]: libpod-73b1964ae82c1168161a56db3b9eb051c6aec1fd089a4243eb1182469168b77d.scope: Deactivated successfully.
Jan 31 05:35:32 np0005603787 podman[256306]: 2026-01-31 10:35:32.996636055 +0000 UTC m=+0.412122836 container died 73b1964ae82c1168161a56db3b9eb051c6aec1fd089a4243eb1182469168b77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gates, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:35:33 np0005603787 systemd[1]: var-lib-containers-storage-overlay-66f87d32caf5a805fa3ae5c2ec34167982e743f5c582e22ef86274be45243e76-merged.mount: Deactivated successfully.
Jan 31 05:35:33 np0005603787 podman[256306]: 2026-01-31 10:35:33.054875615 +0000 UTC m=+0.470362356 container remove 73b1964ae82c1168161a56db3b9eb051c6aec1fd089a4243eb1182469168b77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 05:35:33 np0005603787 systemd[1]: libpod-conmon-73b1964ae82c1168161a56db3b9eb051c6aec1fd089a4243eb1182469168b77d.scope: Deactivated successfully.
Jan 31 05:35:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:35:33 np0005603787 podman[256406]: 2026-01-31 10:35:33.48283866 +0000 UTC m=+0.037003556 container create bf9158524bb7a0f5c955fb69291482fa633d622b3cf07862585476cabeb87390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 05:35:33 np0005603787 systemd[1]: Started libpod-conmon-bf9158524bb7a0f5c955fb69291482fa633d622b3cf07862585476cabeb87390.scope.
Jan 31 05:35:33 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:35:33 np0005603787 podman[256406]: 2026-01-31 10:35:33.534218055 +0000 UTC m=+0.088382971 container init bf9158524bb7a0f5c955fb69291482fa633d622b3cf07862585476cabeb87390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 05:35:33 np0005603787 podman[256406]: 2026-01-31 10:35:33.540587047 +0000 UTC m=+0.094751953 container start bf9158524bb7a0f5c955fb69291482fa633d622b3cf07862585476cabeb87390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 05:35:33 np0005603787 interesting_sinoussi[256422]: 167 167
Jan 31 05:35:33 np0005603787 podman[256406]: 2026-01-31 10:35:33.543558948 +0000 UTC m=+0.097723874 container attach bf9158524bb7a0f5c955fb69291482fa633d622b3cf07862585476cabeb87390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_sinoussi, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:35:33 np0005603787 systemd[1]: libpod-bf9158524bb7a0f5c955fb69291482fa633d622b3cf07862585476cabeb87390.scope: Deactivated successfully.
Jan 31 05:35:33 np0005603787 podman[256406]: 2026-01-31 10:35:33.544913505 +0000 UTC m=+0.099078401 container died bf9158524bb7a0f5c955fb69291482fa633d622b3cf07862585476cabeb87390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_sinoussi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Jan 31 05:35:33 np0005603787 podman[256406]: 2026-01-31 10:35:33.463829334 +0000 UTC m=+0.017994250 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:35:33 np0005603787 systemd[1]: var-lib-containers-storage-overlay-4f1cb13bf1da2a793ef8a3bc1cea2e7c6760ec811adb122d195f97cfdfe50c10-merged.mount: Deactivated successfully.
Jan 31 05:35:33 np0005603787 podman[256406]: 2026-01-31 10:35:33.574169349 +0000 UTC m=+0.128334245 container remove bf9158524bb7a0f5c955fb69291482fa633d622b3cf07862585476cabeb87390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_sinoussi, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:35:33 np0005603787 systemd[1]: libpod-conmon-bf9158524bb7a0f5c955fb69291482fa633d622b3cf07862585476cabeb87390.scope: Deactivated successfully.
Jan 31 05:35:33 np0005603787 podman[256446]: 2026-01-31 10:35:33.700527868 +0000 UTC m=+0.037903260 container create d348ae4fb653abf271eecc0b687e00fc6bbee00d8e2783c4b99a48271634d2a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 05:35:33 np0005603787 systemd[1]: Started libpod-conmon-d348ae4fb653abf271eecc0b687e00fc6bbee00d8e2783c4b99a48271634d2a5.scope.
Jan 31 05:35:33 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:35:33 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d263d4aa105f267155ec8ed6fea90cd68fbf6bec61e172208544fb51f96eb0b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:35:33 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d263d4aa105f267155ec8ed6fea90cd68fbf6bec61e172208544fb51f96eb0b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:35:33 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d263d4aa105f267155ec8ed6fea90cd68fbf6bec61e172208544fb51f96eb0b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:35:33 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d263d4aa105f267155ec8ed6fea90cd68fbf6bec61e172208544fb51f96eb0b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:35:33 np0005603787 podman[256446]: 2026-01-31 10:35:33.684500683 +0000 UTC m=+0.021876085 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:35:33 np0005603787 podman[256446]: 2026-01-31 10:35:33.801759026 +0000 UTC m=+0.139134468 container init d348ae4fb653abf271eecc0b687e00fc6bbee00d8e2783c4b99a48271634d2a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:35:33 np0005603787 podman[256446]: 2026-01-31 10:35:33.809893456 +0000 UTC m=+0.147268888 container start d348ae4fb653abf271eecc0b687e00fc6bbee00d8e2783c4b99a48271634d2a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_sutherland, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:35:33 np0005603787 podman[256446]: 2026-01-31 10:35:33.813240297 +0000 UTC m=+0.150615689 container attach d348ae4fb653abf271eecc0b687e00fc6bbee00d8e2783c4b99a48271634d2a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 05:35:34 np0005603787 lvm[256542]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:35:34 np0005603787 lvm[256542]: VG ceph_vg1 finished
Jan 31 05:35:34 np0005603787 lvm[256540]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:35:34 np0005603787 lvm[256540]: VG ceph_vg0 finished
Jan 31 05:35:34 np0005603787 lvm[256544]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:35:34 np0005603787 lvm[256544]: VG ceph_vg2 finished
Jan 31 05:35:34 np0005603787 affectionate_sutherland[256463]: {}
Jan 31 05:35:34 np0005603787 systemd[1]: libpod-d348ae4fb653abf271eecc0b687e00fc6bbee00d8e2783c4b99a48271634d2a5.scope: Deactivated successfully.
Jan 31 05:35:34 np0005603787 podman[256446]: 2026-01-31 10:35:34.556182649 +0000 UTC m=+0.893558051 container died d348ae4fb653abf271eecc0b687e00fc6bbee00d8e2783c4b99a48271634d2a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 05:35:34 np0005603787 systemd[1]: libpod-d348ae4fb653abf271eecc0b687e00fc6bbee00d8e2783c4b99a48271634d2a5.scope: Consumed 1.085s CPU time.
Jan 31 05:35:34 np0005603787 systemd[1]: var-lib-containers-storage-overlay-d263d4aa105f267155ec8ed6fea90cd68fbf6bec61e172208544fb51f96eb0b9-merged.mount: Deactivated successfully.
Jan 31 05:35:34 np0005603787 podman[256446]: 2026-01-31 10:35:34.603011431 +0000 UTC m=+0.940386843 container remove d348ae4fb653abf271eecc0b687e00fc6bbee00d8e2783c4b99a48271634d2a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:35:34 np0005603787 systemd[1]: libpod-conmon-d348ae4fb653abf271eecc0b687e00fc6bbee00d8e2783c4b99a48271634d2a5.scope: Deactivated successfully.
Jan 31 05:35:34 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:35:34 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:35:34 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:35:34 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:35:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:35 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:35:35 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:35:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:35:37.081 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:35:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:35:37.082 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:35:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:35:37.082 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:35:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:35:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:35:43
Jan 31 05:35:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:35:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:35:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.meta', 'volumes', '.mgr', 'vms']
Jan 31 05:35:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:35:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:35:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:35:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:35:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:35:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:35:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:35:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:35:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:35:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:35:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:35:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:35:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:35:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:35:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:35:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:35:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:35:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:35:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:35:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:51 np0005603787 podman[256587]: 2026-01-31 10:35:51.842833028 +0000 UTC m=+0.056641309 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 05:35:51 np0005603787 podman[256586]: 2026-01-31 10:35:51.871400463 +0000 UTC m=+0.088385320 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 05:35:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 2.450943614167069e-07 of space, bias 1.0, pg target 7.352830842501207e-05 quantized to 32 (current 32)
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.527403468629877e-06 of space, bias 4.0, pg target 0.0018328841623558524 quantized to 16 (current 16)
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:35:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:35:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:35:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:35:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:36:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:36:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:36:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:36:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:36:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:36:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:36:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:36:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:36:14 np0005603787 nova_compute[238603]: 2026-01-31 10:36:14.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:36:14 np0005603787 nova_compute[238603]: 2026-01-31 10:36:14.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:36:14 np0005603787 nova_compute[238603]: 2026-01-31 10:36:14.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:36:15 np0005603787 nova_compute[238603]: 2026-01-31 10:36:15.104 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:36:15 np0005603787 nova_compute[238603]: 2026-01-31 10:36:15.104 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:36:15 np0005603787 nova_compute[238603]: 2026-01-31 10:36:15.105 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:36:15 np0005603787 nova_compute[238603]: 2026-01-31 10:36:15.128 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:36:15 np0005603787 nova_compute[238603]: 2026-01-31 10:36:15.128 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:36:15 np0005603787 nova_compute[238603]: 2026-01-31 10:36:15.129 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:36:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:17 np0005603787 nova_compute[238603]: 2026-01-31 10:36:17.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:36:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:18 np0005603787 nova_compute[238603]: 2026-01-31 10:36:18.098 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:36:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:36:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:21 np0005603787 nova_compute[238603]: 2026-01-31 10:36:21.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:36:21 np0005603787 nova_compute[238603]: 2026-01-31 10:36:21.131 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:36:21 np0005603787 nova_compute[238603]: 2026-01-31 10:36:21.132 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:36:21 np0005603787 nova_compute[238603]: 2026-01-31 10:36:21.132 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:36:21 np0005603787 nova_compute[238603]: 2026-01-31 10:36:21.133 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:36:21 np0005603787 nova_compute[238603]: 2026-01-31 10:36:21.133 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:36:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:36:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4070504928' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:36:21 np0005603787 nova_compute[238603]: 2026-01-31 10:36:21.647 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:36:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:36:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2569613826' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:36:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:36:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2569613826' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:36:21 np0005603787 nova_compute[238603]: 2026-01-31 10:36:21.798 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:36:21 np0005603787 nova_compute[238603]: 2026-01-31 10:36:21.799 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5117MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:36:21 np0005603787 nova_compute[238603]: 2026-01-31 10:36:21.799 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:36:21 np0005603787 nova_compute[238603]: 2026-01-31 10:36:21.799 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:36:21 np0005603787 nova_compute[238603]: 2026-01-31 10:36:21.890 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:36:21 np0005603787 nova_compute[238603]: 2026-01-31 10:36:21.890 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:36:21 np0005603787 nova_compute[238603]: 2026-01-31 10:36:21.911 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:36:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:36:22 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3331591295' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:36:22 np0005603787 nova_compute[238603]: 2026-01-31 10:36:22.467 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:36:22 np0005603787 nova_compute[238603]: 2026-01-31 10:36:22.472 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:36:22 np0005603787 nova_compute[238603]: 2026-01-31 10:36:22.484 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:36:22 np0005603787 nova_compute[238603]: 2026-01-31 10:36:22.486 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:36:22 np0005603787 nova_compute[238603]: 2026-01-31 10:36:22.486 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:36:22 np0005603787 podman[256676]: 2026-01-31 10:36:22.866802904 +0000 UTC m=+0.076775194 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 05:36:22 np0005603787 podman[256675]: 2026-01-31 10:36:22.866797383 +0000 UTC m=+0.080070833 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
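The two podman lines above are periodic health_status events for ovn_metadata_agent and ovn_controller. A hedged sketch of pulling the container name and health state out of such a line, using only field names visible in the log (this is the journald rendering of the event, not a stable podman API):

import re

# Sketch: extract name=... and health_status=... from a podman
# health_status event line as rendered in this journal; the sample
# string is abbreviated from the log above.
line = ("container health_status e94e0b74be76 (image=quay.io/..., "
        "name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0)")
m_name = re.search(r'\bname=([^,)]+)', line)
m_health = re.search(r'\bhealth_status=([^,)]+)', line)
if m_name and m_health:
    print(m_name.group(1), m_health.group(1))  # ovn_metadata_agent healthy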
Jan 31 05:36:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:36:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:27 np0005603787 nova_compute[238603]: 2026-01-31 10:36:27.486 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:36:28 np0005603787 nova_compute[238603]: 2026-01-31 10:36:28.098 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:36:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:36:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:36:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:35 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:36:35 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:36:35 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:36:35 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:36:35 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:36:35 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:36:35 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:36:35 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:36:35 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:36:35 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:36:35 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:36:35 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
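The audit entries above show the cephadm mgr module dispatching mon commands (config generate-minimal-conf, auth get, and an osd tree query filtered to destroyed OSDs). For reference, a hedged sketch of issuing the same two read-only commands from a host with client.admin credentials; the command names come from the log, the surrounding Python is illustrative:

import json
import subprocess

# Sketch: run the same mon commands the mgr dispatched above.
# Assumes the 'ceph' CLI is on PATH and can reach the cluster.
minimal_conf = subprocess.run(
    ['ceph', 'config', 'generate-minimal-conf'],
    capture_output=True, text=True, check=True,
).stdout
print(minimal_conf)

destroyed = json.loads(subprocess.run(
    ['ceph', 'osd', 'tree', 'destroyed', '--format', 'json'],
    capture_output=True, text=True, check=True,
).stdout)
print(len(destroyed.get('nodes', [])), 'entries in the destroyed-OSD tree')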
Jan 31 05:36:35 np0005603787 podman[256860]: 2026-01-31 10:36:35.764205138 +0000 UTC m=+0.036362948 container create d4bcd96b12850491b4122a0d7e3ec583b98aac51c996e8bec343b2af87bbd1fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 05:36:35 np0005603787 systemd[1]: Started libpod-conmon-d4bcd96b12850491b4122a0d7e3ec583b98aac51c996e8bec343b2af87bbd1fe.scope.
Jan 31 05:36:35 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:36:35 np0005603787 podman[256860]: 2026-01-31 10:36:35.840488048 +0000 UTC m=+0.112645918 container init d4bcd96b12850491b4122a0d7e3ec583b98aac51c996e8bec343b2af87bbd1fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 05:36:35 np0005603787 podman[256860]: 2026-01-31 10:36:35.745424318 +0000 UTC m=+0.017582148 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:36:35 np0005603787 podman[256860]: 2026-01-31 10:36:35.846329597 +0000 UTC m=+0.118487397 container start d4bcd96b12850491b4122a0d7e3ec583b98aac51c996e8bec343b2af87bbd1fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_elbakyan, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 05:36:35 np0005603787 podman[256860]: 2026-01-31 10:36:35.850181241 +0000 UTC m=+0.122339051 container attach d4bcd96b12850491b4122a0d7e3ec583b98aac51c996e8bec343b2af87bbd1fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_elbakyan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:36:35 np0005603787 lucid_elbakyan[256877]: 167 167
Jan 31 05:36:35 np0005603787 systemd[1]: libpod-d4bcd96b12850491b4122a0d7e3ec583b98aac51c996e8bec343b2af87bbd1fe.scope: Deactivated successfully.
Jan 31 05:36:35 np0005603787 conmon[256877]: conmon d4bcd96b12850491b412 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d4bcd96b12850491b4122a0d7e3ec583b98aac51c996e8bec343b2af87bbd1fe.scope/container/memory.events
Jan 31 05:36:35 np0005603787 podman[256860]: 2026-01-31 10:36:35.853861571 +0000 UTC m=+0.126019391 container died d4bcd96b12850491b4122a0d7e3ec583b98aac51c996e8bec343b2af87bbd1fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:36:35 np0005603787 systemd[1]: var-lib-containers-storage-overlay-fc6d83383a93c97998f06b9488a63d30664f2c7b24c3c6ed3420d948e65cd333-merged.mount: Deactivated successfully.
Jan 31 05:36:35 np0005603787 podman[256860]: 2026-01-31 10:36:35.898729489 +0000 UTC m=+0.170887269 container remove d4bcd96b12850491b4122a0d7e3ec583b98aac51c996e8bec343b2af87bbd1fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_elbakyan, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:36:35 np0005603787 systemd[1]: libpod-conmon-d4bcd96b12850491b4122a0d7e3ec583b98aac51c996e8bec343b2af87bbd1fe.scope: Deactivated successfully.
Jan 31 05:36:36 np0005603787 podman[256901]: 2026-01-31 10:36:36.020980006 +0000 UTC m=+0.042172365 container create 24735032d59069cee06876b27f6f778c3f3ad5ee9165fdbb6e68e1698e0c27d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_sutherland, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 05:36:36 np0005603787 systemd[1]: Started libpod-conmon-24735032d59069cee06876b27f6f778c3f3ad5ee9165fdbb6e68e1698e0c27d1.scope.
Jan 31 05:36:36 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:36:36 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac1e0634585ea4998b429e02872c24c3cfb9a7db3699229174e5848ad24a48c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:36:36 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac1e0634585ea4998b429e02872c24c3cfb9a7db3699229174e5848ad24a48c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:36:36 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac1e0634585ea4998b429e02872c24c3cfb9a7db3699229174e5848ad24a48c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:36:36 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac1e0634585ea4998b429e02872c24c3cfb9a7db3699229174e5848ad24a48c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:36:36 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac1e0634585ea4998b429e02872c24c3cfb9a7db3699229174e5848ad24a48c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:36:36 np0005603787 podman[256901]: 2026-01-31 10:36:36.003841431 +0000 UTC m=+0.025033820 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:36:36 np0005603787 podman[256901]: 2026-01-31 10:36:36.107180766 +0000 UTC m=+0.128373135 container init 24735032d59069cee06876b27f6f778c3f3ad5ee9165fdbb6e68e1698e0c27d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:36:36 np0005603787 podman[256901]: 2026-01-31 10:36:36.11137822 +0000 UTC m=+0.132570569 container start 24735032d59069cee06876b27f6f778c3f3ad5ee9165fdbb6e68e1698e0c27d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_sutherland, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:36:36 np0005603787 podman[256901]: 2026-01-31 10:36:36.114158415 +0000 UTC m=+0.135350764 container attach 24735032d59069cee06876b27f6f778c3f3ad5ee9165fdbb6e68e1698e0c27d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_sutherland, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:36:36 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:36:36 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:36:36 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:36:36 np0005603787 cranky_sutherland[256917]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:36:36 np0005603787 cranky_sutherland[256917]: --> All data devices are unavailable
Jan 31 05:36:36 np0005603787 systemd[1]: libpod-24735032d59069cee06876b27f6f778c3f3ad5ee9165fdbb6e68e1698e0c27d1.scope: Deactivated successfully.
Jan 31 05:36:36 np0005603787 podman[256937]: 2026-01-31 10:36:36.544441162 +0000 UTC m=+0.023567130 container died 24735032d59069cee06876b27f6f778c3f3ad5ee9165fdbb6e68e1698e0c27d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:36:36 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ac1e0634585ea4998b429e02872c24c3cfb9a7db3699229174e5848ad24a48c5-merged.mount: Deactivated successfully.
Jan 31 05:36:36 np0005603787 podman[256937]: 2026-01-31 10:36:36.590649317 +0000 UTC m=+0.069775205 container remove 24735032d59069cee06876b27f6f778c3f3ad5ee9165fdbb6e68e1698e0c27d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_sutherland, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 05:36:36 np0005603787 systemd[1]: libpod-conmon-24735032d59069cee06876b27f6f778c3f3ad5ee9165fdbb6e68e1698e0c27d1.scope: Deactivated successfully.
Jan 31 05:36:37 np0005603787 podman[257014]: 2026-01-31 10:36:37.03661338 +0000 UTC m=+0.047584872 container create 09d025388cc27ab6126f7ca5502e447d59e562886781c4b29f7b701c41d0d082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 05:36:37 np0005603787 systemd[1]: Started libpod-conmon-09d025388cc27ab6126f7ca5502e447d59e562886781c4b29f7b701c41d0d082.scope.
Jan 31 05:36:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:36:37.082 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 05:36:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:36:37.084 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 05:36:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:36:37.084 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 05:36:37 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:36:37 np0005603787 podman[257014]: 2026-01-31 10:36:37.017967283 +0000 UTC m=+0.028938775 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:36:37 np0005603787 podman[257014]: 2026-01-31 10:36:37.11731474 +0000 UTC m=+0.128286322 container init 09d025388cc27ab6126f7ca5502e447d59e562886781c4b29f7b701c41d0d082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_khorana, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 05:36:37 np0005603787 podman[257014]: 2026-01-31 10:36:37.125594615 +0000 UTC m=+0.136566117 container start 09d025388cc27ab6126f7ca5502e447d59e562886781c4b29f7b701c41d0d082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_khorana, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:36:37 np0005603787 zen_khorana[257031]: 167 167
Jan 31 05:36:37 np0005603787 systemd[1]: libpod-09d025388cc27ab6126f7ca5502e447d59e562886781c4b29f7b701c41d0d082.scope: Deactivated successfully.
Jan 31 05:36:37 np0005603787 podman[257014]: 2026-01-31 10:36:37.132704737 +0000 UTC m=+0.143676289 container attach 09d025388cc27ab6126f7ca5502e447d59e562886781c4b29f7b701c41d0d082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_khorana, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:36:37 np0005603787 podman[257014]: 2026-01-31 10:36:37.13428826 +0000 UTC m=+0.145259772 container died 09d025388cc27ab6126f7ca5502e447d59e562886781c4b29f7b701c41d0d082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_khorana, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 31 05:36:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:37 np0005603787 systemd[1]: var-lib-containers-storage-overlay-600378511e8cd7452496d727cc0df72418031842b715e64520402f08fd983f02-merged.mount: Deactivated successfully.
Jan 31 05:36:37 np0005603787 podman[257014]: 2026-01-31 10:36:37.256426136 +0000 UTC m=+0.267397618 container remove 09d025388cc27ab6126f7ca5502e447d59e562886781c4b29f7b701c41d0d082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 05:36:37 np0005603787 systemd[1]: libpod-conmon-09d025388cc27ab6126f7ca5502e447d59e562886781c4b29f7b701c41d0d082.scope: Deactivated successfully.
Jan 31 05:36:37 np0005603787 podman[257055]: 2026-01-31 10:36:37.389041724 +0000 UTC m=+0.046881683 container create 8b013b5410c8e6178eb246af4fcac9d67145f8c542ffb0a8537a3110b2edb415 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:36:37 np0005603787 systemd[1]: Started libpod-conmon-8b013b5410c8e6178eb246af4fcac9d67145f8c542ffb0a8537a3110b2edb415.scope.
Jan 31 05:36:37 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:36:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae9e5a9e5b917aaf1c79c2ba600cac232dc8179ae0b65a5e124757680db67f5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:36:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae9e5a9e5b917aaf1c79c2ba600cac232dc8179ae0b65a5e124757680db67f5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:36:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae9e5a9e5b917aaf1c79c2ba600cac232dc8179ae0b65a5e124757680db67f5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:36:37 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae9e5a9e5b917aaf1c79c2ba600cac232dc8179ae0b65a5e124757680db67f5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:36:37 np0005603787 podman[257055]: 2026-01-31 10:36:37.373901264 +0000 UTC m=+0.031741243 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:36:37 np0005603787 podman[257055]: 2026-01-31 10:36:37.474896475 +0000 UTC m=+0.132736494 container init 8b013b5410c8e6178eb246af4fcac9d67145f8c542ffb0a8537a3110b2edb415 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gates, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:36:37 np0005603787 podman[257055]: 2026-01-31 10:36:37.481114263 +0000 UTC m=+0.138954232 container start 8b013b5410c8e6178eb246af4fcac9d67145f8c542ffb0a8537a3110b2edb415 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gates, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 05:36:37 np0005603787 podman[257055]: 2026-01-31 10:36:37.486254253 +0000 UTC m=+0.144094222 container attach 8b013b5410c8e6178eb246af4fcac9d67145f8c542ffb0a8537a3110b2edb415 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gates, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 05:36:37 np0005603787 determined_gates[257072]: {
Jan 31 05:36:37 np0005603787 determined_gates[257072]:    "0": [
Jan 31 05:36:37 np0005603787 determined_gates[257072]:        {
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "devices": [
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "/dev/loop3"
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            ],
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "lv_name": "ceph_lv0",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "lv_size": "21470642176",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "name": "ceph_lv0",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "tags": {
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.cluster_name": "ceph",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.crush_device_class": "",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.encrypted": "0",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.objectstore": "bluestore",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.osd_id": "0",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.type": "block",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.vdo": "0",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.with_tpm": "0"
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            },
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "type": "block",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "vg_name": "ceph_vg0"
Jan 31 05:36:37 np0005603787 determined_gates[257072]:        }
Jan 31 05:36:37 np0005603787 determined_gates[257072]:    ],
Jan 31 05:36:37 np0005603787 determined_gates[257072]:    "1": [
Jan 31 05:36:37 np0005603787 determined_gates[257072]:        {
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "devices": [
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "/dev/loop4"
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            ],
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "lv_name": "ceph_lv1",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "lv_size": "21470642176",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "name": "ceph_lv1",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "tags": {
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.cluster_name": "ceph",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.crush_device_class": "",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.encrypted": "0",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.objectstore": "bluestore",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.osd_id": "1",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.type": "block",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.vdo": "0",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.with_tpm": "0"
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            },
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "type": "block",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "vg_name": "ceph_vg1"
Jan 31 05:36:37 np0005603787 determined_gates[257072]:        }
Jan 31 05:36:37 np0005603787 determined_gates[257072]:    ],
Jan 31 05:36:37 np0005603787 determined_gates[257072]:    "2": [
Jan 31 05:36:37 np0005603787 determined_gates[257072]:        {
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "devices": [
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "/dev/loop5"
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            ],
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "lv_name": "ceph_lv2",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "lv_size": "21470642176",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "name": "ceph_lv2",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "tags": {
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.cluster_name": "ceph",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.crush_device_class": "",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.encrypted": "0",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.objectstore": "bluestore",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.osd_id": "2",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.type": "block",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.vdo": "0",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:                "ceph.with_tpm": "0"
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            },
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "type": "block",
Jan 31 05:36:37 np0005603787 determined_gates[257072]:            "vg_name": "ceph_vg2"
Jan 31 05:36:37 np0005603787 determined_gates[257072]:        }
Jan 31 05:36:37 np0005603787 determined_gates[257072]:    ]
Jan 31 05:36:37 np0005603787 determined_gates[257072]: }
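The JSON block printed by the determined_gates one-shot container above maps OSD ids to their backing logical volumes and matches the shape of ceph-volume lvm list --format json output. A hedged sketch that summarizes such a report; it reads the JSON from stdin and relies only on keys visible in the log:

import json
import sys

# Sketch: pipe the JSON block above into this script to get one line per
# OSD with its LV path, backing device(s) and osd_fsid tag.
report = json.load(sys.stdin)
for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv.get('tags', {})
        print(f"osd.{osd_id}: {lv.get('lv_path')} "
              f"on {','.join(lv.get('devices', []))} "
              f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")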
Jan 31 05:36:37 np0005603787 systemd[1]: libpod-8b013b5410c8e6178eb246af4fcac9d67145f8c542ffb0a8537a3110b2edb415.scope: Deactivated successfully.
Jan 31 05:36:37 np0005603787 podman[257055]: 2026-01-31 10:36:37.76950213 +0000 UTC m=+0.427342089 container died 8b013b5410c8e6178eb246af4fcac9d67145f8c542ffb0a8537a3110b2edb415 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gates, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:36:37 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ae9e5a9e5b917aaf1c79c2ba600cac232dc8179ae0b65a5e124757680db67f5e-merged.mount: Deactivated successfully.
Jan 31 05:36:37 np0005603787 podman[257055]: 2026-01-31 10:36:37.812588269 +0000 UTC m=+0.470428248 container remove 8b013b5410c8e6178eb246af4fcac9d67145f8c542ffb0a8537a3110b2edb415 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 05:36:37 np0005603787 systemd[1]: libpod-conmon-8b013b5410c8e6178eb246af4fcac9d67145f8c542ffb0a8537a3110b2edb415.scope: Deactivated successfully.
Jan 31 05:36:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:36:38 np0005603787 podman[257156]: 2026-01-31 10:36:38.237570073 +0000 UTC m=+0.036436430 container create f47144fed379e8f92f414631878fb7db5f33bdb1c213bdbd38ebf65817db097b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 05:36:38 np0005603787 systemd[1]: Started libpod-conmon-f47144fed379e8f92f414631878fb7db5f33bdb1c213bdbd38ebf65817db097b.scope.
Jan 31 05:36:38 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:36:38 np0005603787 podman[257156]: 2026-01-31 10:36:38.297018707 +0000 UTC m=+0.095885084 container init f47144fed379e8f92f414631878fb7db5f33bdb1c213bdbd38ebf65817db097b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:36:38 np0005603787 podman[257156]: 2026-01-31 10:36:38.303816401 +0000 UTC m=+0.102682788 container start f47144fed379e8f92f414631878fb7db5f33bdb1c213bdbd38ebf65817db097b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:36:38 np0005603787 angry_ritchie[257173]: 167 167
Jan 31 05:36:38 np0005603787 systemd[1]: libpod-f47144fed379e8f92f414631878fb7db5f33bdb1c213bdbd38ebf65817db097b.scope: Deactivated successfully.
Jan 31 05:36:38 np0005603787 podman[257156]: 2026-01-31 10:36:38.30748355 +0000 UTC m=+0.106349907 container attach f47144fed379e8f92f414631878fb7db5f33bdb1c213bdbd38ebf65817db097b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ritchie, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:36:38 np0005603787 conmon[257173]: conmon f47144fed379e8f92f41 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f47144fed379e8f92f414631878fb7db5f33bdb1c213bdbd38ebf65817db097b.scope/container/memory.events
Jan 31 05:36:38 np0005603787 podman[257156]: 2026-01-31 10:36:38.308909689 +0000 UTC m=+0.107776086 container died f47144fed379e8f92f414631878fb7db5f33bdb1c213bdbd38ebf65817db097b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ritchie, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 05:36:38 np0005603787 podman[257156]: 2026-01-31 10:36:38.223640324 +0000 UTC m=+0.022506711 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:36:38 np0005603787 systemd[1]: var-lib-containers-storage-overlay-70d6d25c36135ff1181a07f7619272ef581b983cf69b8ec3da4fa65f1eb244f6-merged.mount: Deactivated successfully.
Jan 31 05:36:38 np0005603787 podman[257156]: 2026-01-31 10:36:38.34579573 +0000 UTC m=+0.144662087 container remove f47144fed379e8f92f414631878fb7db5f33bdb1c213bdbd38ebf65817db097b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 05:36:38 np0005603787 systemd[1]: libpod-conmon-f47144fed379e8f92f414631878fb7db5f33bdb1c213bdbd38ebf65817db097b.scope: Deactivated successfully.
Jan 31 05:36:38 np0005603787 podman[257196]: 2026-01-31 10:36:38.523283147 +0000 UTC m=+0.063750871 container create db6c46c08e5a5d993d9f05390461cfb1aa4c0b1f403a03e90231c2dfb3a74ea8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:36:38 np0005603787 systemd[1]: Started libpod-conmon-db6c46c08e5a5d993d9f05390461cfb1aa4c0b1f403a03e90231c2dfb3a74ea8.scope.
Jan 31 05:36:38 np0005603787 podman[257196]: 2026-01-31 10:36:38.496685266 +0000 UTC m=+0.037153040 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:36:38 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:36:38 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a21a222ddc7f458a58babe08b9a98b38ba66c9e9195e58d7e30b459e0d16e734/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:36:38 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a21a222ddc7f458a58babe08b9a98b38ba66c9e9195e58d7e30b459e0d16e734/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:36:38 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a21a222ddc7f458a58babe08b9a98b38ba66c9e9195e58d7e30b459e0d16e734/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:36:38 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a21a222ddc7f458a58babe08b9a98b38ba66c9e9195e58d7e30b459e0d16e734/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:36:38 np0005603787 podman[257196]: 2026-01-31 10:36:38.614176313 +0000 UTC m=+0.154644037 container init db6c46c08e5a5d993d9f05390461cfb1aa4c0b1f403a03e90231c2dfb3a74ea8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 05:36:38 np0005603787 podman[257196]: 2026-01-31 10:36:38.620826004 +0000 UTC m=+0.161293718 container start db6c46c08e5a5d993d9f05390461cfb1aa4c0b1f403a03e90231c2dfb3a74ea8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_ishizaka, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Jan 31 05:36:38 np0005603787 podman[257196]: 2026-01-31 10:36:38.62471245 +0000 UTC m=+0.165180144 container attach db6c46c08e5a5d993d9f05390461cfb1aa4c0b1f403a03e90231c2dfb3a74ea8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_ishizaka, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:36:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:39 np0005603787 lvm[257291]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:36:39 np0005603787 lvm[257292]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:36:39 np0005603787 lvm[257292]: VG ceph_vg1 finished
Jan 31 05:36:39 np0005603787 lvm[257291]: VG ceph_vg0 finished
Jan 31 05:36:39 np0005603787 lvm[257294]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:36:39 np0005603787 lvm[257294]: VG ceph_vg2 finished
Jan 31 05:36:39 np0005603787 fervent_ishizaka[257212]: {}
Jan 31 05:36:39 np0005603787 systemd[1]: libpod-db6c46c08e5a5d993d9f05390461cfb1aa4c0b1f403a03e90231c2dfb3a74ea8.scope: Deactivated successfully.
Jan 31 05:36:39 np0005603787 systemd[1]: libpod-db6c46c08e5a5d993d9f05390461cfb1aa4c0b1f403a03e90231c2dfb3a74ea8.scope: Consumed 1.090s CPU time.
Jan 31 05:36:39 np0005603787 podman[257196]: 2026-01-31 10:36:39.380731358 +0000 UTC m=+0.921199062 container died db6c46c08e5a5d993d9f05390461cfb1aa4c0b1f403a03e90231c2dfb3a74ea8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_ishizaka, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 05:36:39 np0005603787 systemd[1]: var-lib-containers-storage-overlay-a21a222ddc7f458a58babe08b9a98b38ba66c9e9195e58d7e30b459e0d16e734-merged.mount: Deactivated successfully.
Jan 31 05:36:39 np0005603787 podman[257196]: 2026-01-31 10:36:39.42687444 +0000 UTC m=+0.967342124 container remove db6c46c08e5a5d993d9f05390461cfb1aa4c0b1f403a03e90231c2dfb3a74ea8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_ishizaka, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:36:39 np0005603787 systemd[1]: libpod-conmon-db6c46c08e5a5d993d9f05390461cfb1aa4c0b1f403a03e90231c2dfb3a74ea8.scope: Deactivated successfully.
Jan 31 05:36:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:36:39 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:36:39 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:36:39 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:36:40 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:36:40 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:36:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:36:43
Jan 31 05:36:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:36:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:36:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['images', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'default.rgw.meta', '.mgr', 'vms', 'backups']
Jan 31 05:36:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:36:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:36:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:36:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:36:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:36:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:36:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:36:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:36:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:36:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:36:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:36:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:36:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:36:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:36:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:36:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:36:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:36:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:36:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:36:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:36:53 np0005603787 podman[257335]: 2026-01-31 10:36:53.845309143 +0000 UTC m=+0.060488284 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 05:36:53 np0005603787 podman[257334]: 2026-01-31 10:36:53.892590425 +0000 UTC m=+0.108031723 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 2.450943614167069e-07 of space, bias 1.0, pg target 7.352830842501207e-05 quantized to 32 (current 32)
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.527403468629877e-06 of space, bias 4.0, pg target 0.0018328841623558524 quantized to 16 (current 16)
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:36:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:36:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:36:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:36:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:37:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:37:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:37:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:37:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:37:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:37:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:37:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:37:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:37:14 np0005603787 nova_compute[238603]: 2026-01-31 10:37:14.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:37:15 np0005603787 nova_compute[238603]: 2026-01-31 10:37:15.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:37:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:16 np0005603787 nova_compute[238603]: 2026-01-31 10:37:16.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:37:16 np0005603787 nova_compute[238603]: 2026-01-31 10:37:16.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:37:16 np0005603787 nova_compute[238603]: 2026-01-31 10:37:16.104 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:37:17 np0005603787 nova_compute[238603]: 2026-01-31 10:37:17.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:37:17 np0005603787 nova_compute[238603]: 2026-01-31 10:37:17.104 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:37:17 np0005603787 nova_compute[238603]: 2026-01-31 10:37:17.104 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:37:17 np0005603787 nova_compute[238603]: 2026-01-31 10:37:17.122 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:37:17 np0005603787 nova_compute[238603]: 2026-01-31 10:37:17.122 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:37:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:18 np0005603787 nova_compute[238603]: 2026-01-31 10:37:18.116 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:37:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:37:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:21 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:37:21 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6027 writes, 26K keys, 6027 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 6027 writes, 6027 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1300 writes, 5671 keys, 1300 commit groups, 1.0 writes per commit group, ingest: 8.75 MB, 0.01 MB/s#012Interval WAL: 1300 writes, 1300 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     91.9      0.34              0.08        15    0.023       0      0       0.0       0.0#012  L6      1/0    7.40 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    169.0    137.4      0.76              0.23        14    0.054     65K   7817       0.0       0.0#012 Sum      1/0    7.40 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    116.5    123.2      1.10              0.31        29    0.038     65K   7817       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8     79.5     79.4      0.38              0.07         6    0.063     16K   2028       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    169.0    137.4      0.76              0.23        14    0.054     65K   7817       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     92.7      0.34              0.08        14    0.024       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.031, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.13 GB write, 0.06 MB/s write, 0.12 GB read, 0.05 MB/s read, 1.1 seconds#012Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b1fd4298d0#2 capacity: 304.00 MB usage: 13.98 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000183 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(869,13.46 MB,4.42725%) FilterBlock(30,187.23 KB,0.0601467%) IndexBlock(30,346.92 KB,0.111444%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 05:37:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:37:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2440015735' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:37:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:37:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2440015735' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:37:23 np0005603787 nova_compute[238603]: 2026-01-31 10:37:23.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:37:23 np0005603787 nova_compute[238603]: 2026-01-31 10:37:23.128 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:37:23 np0005603787 nova_compute[238603]: 2026-01-31 10:37:23.129 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:37:23 np0005603787 nova_compute[238603]: 2026-01-31 10:37:23.129 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:37:23 np0005603787 nova_compute[238603]: 2026-01-31 10:37:23.129 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:37:23 np0005603787 nova_compute[238603]: 2026-01-31 10:37:23.130 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:37:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:37:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:37:23 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1241210618' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:37:23 np0005603787 nova_compute[238603]: 2026-01-31 10:37:23.751 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:37:23 np0005603787 nova_compute[238603]: 2026-01-31 10:37:23.891 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:37:23 np0005603787 nova_compute[238603]: 2026-01-31 10:37:23.892 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5121MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:37:23 np0005603787 nova_compute[238603]: 2026-01-31 10:37:23.892 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:37:23 np0005603787 nova_compute[238603]: 2026-01-31 10:37:23.892 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:37:23 np0005603787 nova_compute[238603]: 2026-01-31 10:37:23.964 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:37:23 np0005603787 nova_compute[238603]: 2026-01-31 10:37:23.965 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:37:23 np0005603787 nova_compute[238603]: 2026-01-31 10:37:23.977 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:37:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:37:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3693868052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:37:24 np0005603787 nova_compute[238603]: 2026-01-31 10:37:24.494 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:37:24 np0005603787 nova_compute[238603]: 2026-01-31 10:37:24.500 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:37:24 np0005603787 nova_compute[238603]: 2026-01-31 10:37:24.521 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:37:24 np0005603787 nova_compute[238603]: 2026-01-31 10:37:24.523 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:37:24 np0005603787 nova_compute[238603]: 2026-01-31 10:37:24.523 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:37:24 np0005603787 podman[257422]: 2026-01-31 10:37:24.838245327 +0000 UTC m=+0.051878739 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 05:37:24 np0005603787 podman[257421]: 2026-01-31 10:37:24.868639332 +0000 UTC m=+0.081655196 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 05:37:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:37:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:29 np0005603787 nova_compute[238603]: 2026-01-31 10:37:29.523 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:37:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:37:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:37:37.082 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:37:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:37:37.083 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:37:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:37:37.083 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:37:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:37:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:37:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:37:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:37:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:37:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:37:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:37:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:37:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:37:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:37:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:37:40 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:37:40 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:37:40 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:37:40 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:37:40 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:37:40 np0005603787 podman[257611]: 2026-01-31 10:37:40.669173025 +0000 UTC m=+0.098387721 container create a06b4b5713007265eec656460b465a0cd4adf2ac33b386d54febaae2f939212d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_roentgen, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:37:40 np0005603787 podman[257611]: 2026-01-31 10:37:40.604327956 +0000 UTC m=+0.033542642 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:37:40 np0005603787 systemd[1]: Started libpod-conmon-a06b4b5713007265eec656460b465a0cd4adf2ac33b386d54febaae2f939212d.scope.
Jan 31 05:37:40 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:37:40 np0005603787 podman[257611]: 2026-01-31 10:37:40.769618061 +0000 UTC m=+0.198832747 container init a06b4b5713007265eec656460b465a0cd4adf2ac33b386d54febaae2f939212d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 05:37:40 np0005603787 podman[257611]: 2026-01-31 10:37:40.780767624 +0000 UTC m=+0.209982290 container start a06b4b5713007265eec656460b465a0cd4adf2ac33b386d54febaae2f939212d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_roentgen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:37:40 np0005603787 sharp_roentgen[257628]: 167 167
Jan 31 05:37:40 np0005603787 podman[257611]: 2026-01-31 10:37:40.787114736 +0000 UTC m=+0.216329402 container attach a06b4b5713007265eec656460b465a0cd4adf2ac33b386d54febaae2f939212d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_roentgen, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:37:40 np0005603787 systemd[1]: libpod-a06b4b5713007265eec656460b465a0cd4adf2ac33b386d54febaae2f939212d.scope: Deactivated successfully.
Jan 31 05:37:40 np0005603787 podman[257611]: 2026-01-31 10:37:40.789408928 +0000 UTC m=+0.218623584 container died a06b4b5713007265eec656460b465a0cd4adf2ac33b386d54febaae2f939212d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 05:37:40 np0005603787 systemd[1]: var-lib-containers-storage-overlay-b2e09a065be01a09d03fb17b778abc629ad9d5e6ffb7f40ada13e2b8e8bc2b06-merged.mount: Deactivated successfully.
Jan 31 05:37:40 np0005603787 podman[257611]: 2026-01-31 10:37:40.849415717 +0000 UTC m=+0.278630383 container remove a06b4b5713007265eec656460b465a0cd4adf2ac33b386d54febaae2f939212d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 05:37:40 np0005603787 systemd[1]: libpod-conmon-a06b4b5713007265eec656460b465a0cd4adf2ac33b386d54febaae2f939212d.scope: Deactivated successfully.
Jan 31 05:37:41 np0005603787 podman[257651]: 2026-01-31 10:37:41.007798065 +0000 UTC m=+0.058777686 container create 4dc969dacaf742fe803b64511c5319c0fa2630ff8db8a6e3fc932b87e50f06ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 05:37:41 np0005603787 systemd[1]: Started libpod-conmon-4dc969dacaf742fe803b64511c5319c0fa2630ff8db8a6e3fc932b87e50f06ac.scope.
Jan 31 05:37:41 np0005603787 podman[257651]: 2026-01-31 10:37:40.979311901 +0000 UTC m=+0.030291602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:37:41 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:37:41 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30ae1b7cf5f42c1b9ba8aaa2dd5b27019e52a13499e6ea7e1331b13dcbf335d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:37:41 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30ae1b7cf5f42c1b9ba8aaa2dd5b27019e52a13499e6ea7e1331b13dcbf335d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:37:41 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30ae1b7cf5f42c1b9ba8aaa2dd5b27019e52a13499e6ea7e1331b13dcbf335d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:37:41 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30ae1b7cf5f42c1b9ba8aaa2dd5b27019e52a13499e6ea7e1331b13dcbf335d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:37:41 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30ae1b7cf5f42c1b9ba8aaa2dd5b27019e52a13499e6ea7e1331b13dcbf335d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:37:41 np0005603787 podman[257651]: 2026-01-31 10:37:41.096226835 +0000 UTC m=+0.147206456 container init 4dc969dacaf742fe803b64511c5319c0fa2630ff8db8a6e3fc932b87e50f06ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 05:37:41 np0005603787 podman[257651]: 2026-01-31 10:37:41.10344423 +0000 UTC m=+0.154423891 container start 4dc969dacaf742fe803b64511c5319c0fa2630ff8db8a6e3fc932b87e50f06ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 05:37:41 np0005603787 podman[257651]: 2026-01-31 10:37:41.112017053 +0000 UTC m=+0.162996714 container attach 4dc969dacaf742fe803b64511c5319c0fa2630ff8db8a6e3fc932b87e50f06ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 05:37:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:41 np0005603787 infallible_beaver[257667]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:37:41 np0005603787 infallible_beaver[257667]: --> All data devices are unavailable
Jan 31 05:37:41 np0005603787 systemd[1]: libpod-4dc969dacaf742fe803b64511c5319c0fa2630ff8db8a6e3fc932b87e50f06ac.scope: Deactivated successfully.
Jan 31 05:37:41 np0005603787 podman[257651]: 2026-01-31 10:37:41.570982329 +0000 UTC m=+0.621961950 container died 4dc969dacaf742fe803b64511c5319c0fa2630ff8db8a6e3fc932b87e50f06ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 05:37:41 np0005603787 systemd[1]: var-lib-containers-storage-overlay-30ae1b7cf5f42c1b9ba8aaa2dd5b27019e52a13499e6ea7e1331b13dcbf335d2-merged.mount: Deactivated successfully.
Jan 31 05:37:41 np0005603787 podman[257651]: 2026-01-31 10:37:41.618321524 +0000 UTC m=+0.669301135 container remove 4dc969dacaf742fe803b64511c5319c0fa2630ff8db8a6e3fc932b87e50f06ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:37:41 np0005603787 systemd[1]: libpod-conmon-4dc969dacaf742fe803b64511c5319c0fa2630ff8db8a6e3fc932b87e50f06ac.scope: Deactivated successfully.
Jan 31 05:37:42 np0005603787 podman[257761]: 2026-01-31 10:37:42.091881176 +0000 UTC m=+0.041221529 container create a3d018668ab19a022c7193ad8b12e6a044dd7ae62c21b7d07d826ae7cb502c7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_goldstine, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 05:37:42 np0005603787 systemd[1]: Started libpod-conmon-a3d018668ab19a022c7193ad8b12e6a044dd7ae62c21b7d07d826ae7cb502c7b.scope.
Jan 31 05:37:42 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:37:42 np0005603787 podman[257761]: 2026-01-31 10:37:42.074991287 +0000 UTC m=+0.024331610 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:37:42 np0005603787 podman[257761]: 2026-01-31 10:37:42.176214134 +0000 UTC m=+0.125554497 container init a3d018668ab19a022c7193ad8b12e6a044dd7ae62c21b7d07d826ae7cb502c7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_goldstine, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:37:42 np0005603787 podman[257761]: 2026-01-31 10:37:42.182704121 +0000 UTC m=+0.132044474 container start a3d018668ab19a022c7193ad8b12e6a044dd7ae62c21b7d07d826ae7cb502c7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:37:42 np0005603787 podman[257761]: 2026-01-31 10:37:42.186821523 +0000 UTC m=+0.136161896 container attach a3d018668ab19a022c7193ad8b12e6a044dd7ae62c21b7d07d826ae7cb502c7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_goldstine, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 05:37:42 np0005603787 jolly_goldstine[257777]: 167 167
Jan 31 05:37:42 np0005603787 systemd[1]: libpod-a3d018668ab19a022c7193ad8b12e6a044dd7ae62c21b7d07d826ae7cb502c7b.scope: Deactivated successfully.
Jan 31 05:37:42 np0005603787 podman[257761]: 2026-01-31 10:37:42.189478065 +0000 UTC m=+0.138818388 container died a3d018668ab19a022c7193ad8b12e6a044dd7ae62c21b7d07d826ae7cb502c7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:37:42 np0005603787 systemd[1]: var-lib-containers-storage-overlay-06a6ba7b367fdaa3acc1ae2c8582296986d17fa12de05276a943efe067a860e2-merged.mount: Deactivated successfully.
Jan 31 05:37:42 np0005603787 podman[257761]: 2026-01-31 10:37:42.222553182 +0000 UTC m=+0.171893505 container remove a3d018668ab19a022c7193ad8b12e6a044dd7ae62c21b7d07d826ae7cb502c7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 05:37:42 np0005603787 systemd[1]: libpod-conmon-a3d018668ab19a022c7193ad8b12e6a044dd7ae62c21b7d07d826ae7cb502c7b.scope: Deactivated successfully.
Jan 31 05:37:42 np0005603787 podman[257800]: 2026-01-31 10:37:42.363770135 +0000 UTC m=+0.049027312 container create 77eb44afc2a3f9f181e72c90d832cb689cb47ff2daa898bd1941453048f64fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:37:42 np0005603787 systemd[1]: Started libpod-conmon-77eb44afc2a3f9f181e72c90d832cb689cb47ff2daa898bd1941453048f64fe8.scope.
Jan 31 05:37:42 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:37:42 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a25806e85aa01c0369282f69580e1ecadefae4ec5bfb3f2ee0be6aa340b011/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:37:42 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a25806e85aa01c0369282f69580e1ecadefae4ec5bfb3f2ee0be6aa340b011/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:37:42 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a25806e85aa01c0369282f69580e1ecadefae4ec5bfb3f2ee0be6aa340b011/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:37:42 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a25806e85aa01c0369282f69580e1ecadefae4ec5bfb3f2ee0be6aa340b011/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:37:42 np0005603787 podman[257800]: 2026-01-31 10:37:42.339316571 +0000 UTC m=+0.024573808 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:37:42 np0005603787 podman[257800]: 2026-01-31 10:37:42.45056412 +0000 UTC m=+0.135821327 container init 77eb44afc2a3f9f181e72c90d832cb689cb47ff2daa898bd1941453048f64fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 05:37:42 np0005603787 podman[257800]: 2026-01-31 10:37:42.463757129 +0000 UTC m=+0.149014266 container start 77eb44afc2a3f9f181e72c90d832cb689cb47ff2daa898bd1941453048f64fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 05:37:42 np0005603787 podman[257800]: 2026-01-31 10:37:42.467983963 +0000 UTC m=+0.153241180 container attach 77eb44afc2a3f9f181e72c90d832cb689cb47ff2daa898bd1941453048f64fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:37:42 np0005603787 brave_murdock[257816]: {
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:    "0": [
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:        {
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "devices": [
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "/dev/loop3"
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            ],
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "lv_name": "ceph_lv0",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "lv_size": "21470642176",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "name": "ceph_lv0",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "tags": {
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.cluster_name": "ceph",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.crush_device_class": "",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.encrypted": "0",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.objectstore": "bluestore",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.osd_id": "0",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.type": "block",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.vdo": "0",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.with_tpm": "0"
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            },
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "type": "block",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "vg_name": "ceph_vg0"
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:        }
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:    ],
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:    "1": [
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:        {
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "devices": [
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "/dev/loop4"
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            ],
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "lv_name": "ceph_lv1",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "lv_size": "21470642176",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "name": "ceph_lv1",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "tags": {
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.cluster_name": "ceph",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.crush_device_class": "",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.encrypted": "0",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.objectstore": "bluestore",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.osd_id": "1",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.type": "block",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.vdo": "0",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.with_tpm": "0"
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            },
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "type": "block",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "vg_name": "ceph_vg1"
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:        }
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:    ],
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:    "2": [
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:        {
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "devices": [
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "/dev/loop5"
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            ],
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "lv_name": "ceph_lv2",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "lv_size": "21470642176",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "name": "ceph_lv2",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "tags": {
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.cluster_name": "ceph",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.crush_device_class": "",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.encrypted": "0",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.objectstore": "bluestore",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.osd_id": "2",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.type": "block",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.vdo": "0",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:                "ceph.with_tpm": "0"
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            },
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "type": "block",
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:            "vg_name": "ceph_vg2"
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:        }
Jan 31 05:37:42 np0005603787 brave_murdock[257816]:    ]
Jan 31 05:37:42 np0005603787 brave_murdock[257816]: }
Jan 31 05:37:42 np0005603787 systemd[1]: libpod-77eb44afc2a3f9f181e72c90d832cb689cb47ff2daa898bd1941453048f64fe8.scope: Deactivated successfully.
Jan 31 05:37:42 np0005603787 podman[257800]: 2026-01-31 10:37:42.733359985 +0000 UTC m=+0.418617172 container died 77eb44afc2a3f9f181e72c90d832cb689cb47ff2daa898bd1941453048f64fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:37:42 np0005603787 systemd[1]: var-lib-containers-storage-overlay-81a25806e85aa01c0369282f69580e1ecadefae4ec5bfb3f2ee0be6aa340b011-merged.mount: Deactivated successfully.
Jan 31 05:37:42 np0005603787 podman[257800]: 2026-01-31 10:37:42.780638338 +0000 UTC m=+0.465895475 container remove 77eb44afc2a3f9f181e72c90d832cb689cb47ff2daa898bd1941453048f64fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 05:37:42 np0005603787 systemd[1]: libpod-conmon-77eb44afc2a3f9f181e72c90d832cb689cb47ff2daa898bd1941453048f64fe8.scope: Deactivated successfully.
Jan 31 05:37:43 np0005603787 podman[257900]: 2026-01-31 10:37:43.175345 +0000 UTC m=+0.037489128 container create 8110cf96a98602803833cdea4520de2be6bf91f0cfc2cfa18739b625a4b4fa95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_kepler, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:37:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:37:43
Jan 31 05:37:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:37:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:37:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'vms', 'backups', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'volumes']
Jan 31 05:37:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:37:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:43 np0005603787 systemd[1]: Started libpod-conmon-8110cf96a98602803833cdea4520de2be6bf91f0cfc2cfa18739b625a4b4fa95.scope.
Jan 31 05:37:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:37:43 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:37:43 np0005603787 podman[257900]: 2026-01-31 10:37:43.158262427 +0000 UTC m=+0.020406585 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:37:43 np0005603787 podman[257900]: 2026-01-31 10:37:43.257304624 +0000 UTC m=+0.119448772 container init 8110cf96a98602803833cdea4520de2be6bf91f0cfc2cfa18739b625a4b4fa95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_kepler, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:37:43 np0005603787 podman[257900]: 2026-01-31 10:37:43.264659145 +0000 UTC m=+0.126803283 container start 8110cf96a98602803833cdea4520de2be6bf91f0cfc2cfa18739b625a4b4fa95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_kepler, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:37:43 np0005603787 podman[257900]: 2026-01-31 10:37:43.268164729 +0000 UTC m=+0.130308877 container attach 8110cf96a98602803833cdea4520de2be6bf91f0cfc2cfa18739b625a4b4fa95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:37:43 np0005603787 practical_kepler[257917]: 167 167
Jan 31 05:37:43 np0005603787 systemd[1]: libpod-8110cf96a98602803833cdea4520de2be6bf91f0cfc2cfa18739b625a4b4fa95.scope: Deactivated successfully.
Jan 31 05:37:43 np0005603787 podman[257900]: 2026-01-31 10:37:43.270654897 +0000 UTC m=+0.132799035 container died 8110cf96a98602803833cdea4520de2be6bf91f0cfc2cfa18739b625a4b4fa95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_kepler, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 05:37:43 np0005603787 systemd[1]: var-lib-containers-storage-overlay-23656f1b46d50a2ff97fae017d2a0b009957cce22cfee62f384693ffb24573a7-merged.mount: Deactivated successfully.
Jan 31 05:37:43 np0005603787 podman[257900]: 2026-01-31 10:37:43.315235507 +0000 UTC m=+0.177379635 container remove 8110cf96a98602803833cdea4520de2be6bf91f0cfc2cfa18739b625a4b4fa95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_kepler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 05:37:43 np0005603787 systemd[1]: libpod-conmon-8110cf96a98602803833cdea4520de2be6bf91f0cfc2cfa18739b625a4b4fa95.scope: Deactivated successfully.
Jan 31 05:37:43 np0005603787 podman[257941]: 2026-01-31 10:37:43.458399392 +0000 UTC m=+0.057530392 container create ab69748e8d7b766494f16f62f201bea7428be80552352e8633a20bae460b1ebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_snyder, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 05:37:43 np0005603787 systemd[1]: Started libpod-conmon-ab69748e8d7b766494f16f62f201bea7428be80552352e8633a20bae460b1ebf.scope.
Jan 31 05:37:43 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:37:43 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcee6ec0835d3b10739575fb823a69e92a49c03a2d9319b86dbc3de76d55dbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:37:43 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcee6ec0835d3b10739575fb823a69e92a49c03a2d9319b86dbc3de76d55dbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:37:43 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcee6ec0835d3b10739575fb823a69e92a49c03a2d9319b86dbc3de76d55dbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:37:43 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcee6ec0835d3b10739575fb823a69e92a49c03a2d9319b86dbc3de76d55dbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:37:43 np0005603787 podman[257941]: 2026-01-31 10:37:43.526216572 +0000 UTC m=+0.125347562 container init ab69748e8d7b766494f16f62f201bea7428be80552352e8633a20bae460b1ebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:37:43 np0005603787 podman[257941]: 2026-01-31 10:37:43.431634116 +0000 UTC m=+0.030765166 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:37:43 np0005603787 podman[257941]: 2026-01-31 10:37:43.537297493 +0000 UTC m=+0.136428453 container start ab69748e8d7b766494f16f62f201bea7428be80552352e8633a20bae460b1ebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_snyder, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 05:37:43 np0005603787 podman[257941]: 2026-01-31 10:37:43.541919099 +0000 UTC m=+0.141050399 container attach ab69748e8d7b766494f16f62f201bea7428be80552352e8633a20bae460b1ebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_snyder, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:37:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:37:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:37:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:37:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:37:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:37:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:37:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:37:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:37:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:37:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:37:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:37:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:37:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:37:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:37:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:37:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:37:44 np0005603787 lvm[258037]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:37:44 np0005603787 lvm[258036]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:37:44 np0005603787 lvm[258037]: VG ceph_vg1 finished
Jan 31 05:37:44 np0005603787 lvm[258036]: VG ceph_vg0 finished
Jan 31 05:37:44 np0005603787 lvm[258039]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:37:44 np0005603787 lvm[258039]: VG ceph_vg2 finished
Jan 31 05:37:44 np0005603787 unruffled_snyder[257958]: {}
Jan 31 05:37:44 np0005603787 systemd[1]: libpod-ab69748e8d7b766494f16f62f201bea7428be80552352e8633a20bae460b1ebf.scope: Deactivated successfully.
Jan 31 05:37:44 np0005603787 systemd[1]: libpod-ab69748e8d7b766494f16f62f201bea7428be80552352e8633a20bae460b1ebf.scope: Consumed 1.047s CPU time.
Jan 31 05:37:44 np0005603787 podman[257941]: 2026-01-31 10:37:44.297901556 +0000 UTC m=+0.897032516 container died ab69748e8d7b766494f16f62f201bea7428be80552352e8633a20bae460b1ebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 05:37:44 np0005603787 systemd[1]: var-lib-containers-storage-overlay-bfcee6ec0835d3b10739575fb823a69e92a49c03a2d9319b86dbc3de76d55dbb-merged.mount: Deactivated successfully.
Jan 31 05:37:44 np0005603787 podman[257941]: 2026-01-31 10:37:44.614886618 +0000 UTC m=+1.214017578 container remove ab69748e8d7b766494f16f62f201bea7428be80552352e8633a20bae460b1ebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_snyder, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 05:37:44 np0005603787 systemd[1]: libpod-conmon-ab69748e8d7b766494f16f62f201bea7428be80552352e8633a20bae460b1ebf.scope: Deactivated successfully.
Jan 31 05:37:44 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:37:44 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:37:44 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:37:44 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:37:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:37:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 31 05:37:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:37:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:37:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 31 05:37:45 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 31 05:37:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Jan 31 05:37:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:37:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Jan 31 05:37:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Jan 31 05:37:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Jan 31 05:37:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033321734276446326 of space, bias 1.0, pg target 0.09996520282933898 quantized to 32 (current 32)
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.529204377347451e-06 of space, bias 4.0, pg target 0.0018350452528169412 quantized to 16 (current 16)
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:37:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:37:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Jan 31 05:37:55 np0005603787 podman[258083]: 2026-01-31 10:37:55.871069762 +0000 UTC m=+0.074967217 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:37:55 np0005603787 podman[258082]: 2026-01-31 10:37:55.921899251 +0000 UTC m=+0.128697855 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 05:37:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 1.8 MiB/s wr, 10 op/s
Jan 31 05:37:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:37:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 1.7 MiB/s wr, 9 op/s
Jan 31 05:38:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 31 05:38:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:38:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:38:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:38:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:38:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:38:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:38:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:38:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:38:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:38:14.403912) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855894403948, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2060, "num_deletes": 251, "total_data_size": 3527518, "memory_usage": 3574064, "flush_reason": "Manual Compaction"}
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855894431410, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3460901, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25691, "largest_seqno": 27750, "table_properties": {"data_size": 3451391, "index_size": 6068, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18868, "raw_average_key_size": 20, "raw_value_size": 3432480, "raw_average_value_size": 3663, "num_data_blocks": 269, "num_entries": 937, "num_filter_entries": 937, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769855662, "oldest_key_time": 1769855662, "file_creation_time": 1769855894, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 27556 microseconds, and 5816 cpu microseconds.
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:38:14.431465) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3460901 bytes OK
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:38:14.431485) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:38:14.433661) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:38:14.433684) EVENT_LOG_v1 {"time_micros": 1769855894433678, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:38:14.433702) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3518883, prev total WAL file size 3518883, number of live WAL files 2.
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:38:14.434324) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3379KB)], [59(7576KB)]
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855894434441, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 11219142, "oldest_snapshot_seqno": -1}
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5146 keys, 9420502 bytes, temperature: kUnknown
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855894484492, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9420502, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9384109, "index_size": 22395, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 127646, "raw_average_key_size": 24, "raw_value_size": 9289140, "raw_average_value_size": 1805, "num_data_blocks": 926, "num_entries": 5146, "num_filter_entries": 5146, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853439, "oldest_key_time": 0, "file_creation_time": 1769855894, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:38:14.484726) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9420502 bytes
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:38:14.486153) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.9 rd, 188.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.4 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 5664, records dropped: 518 output_compression: NoCompression
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:38:14.486181) EVENT_LOG_v1 {"time_micros": 1769855894486168, "job": 32, "event": "compaction_finished", "compaction_time_micros": 50116, "compaction_time_cpu_micros": 20249, "output_level": 6, "num_output_files": 1, "total_output_size": 9420502, "num_input_records": 5664, "num_output_records": 5146, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855894486731, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855894488008, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:38:14.434199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:38:14.488227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:38:14.488236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:38:14.488239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:38:14.488242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:38:14 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:38:14.488245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:38:15 np0005603787 nova_compute[238603]: 2026-01-31 10:38:15.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:38:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:16 np0005603787 nova_compute[238603]: 2026-01-31 10:38:16.104 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:38:16 np0005603787 nova_compute[238603]: 2026-01-31 10:38:16.104 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:38:17 np0005603787 nova_compute[238603]: 2026-01-31 10:38:17.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:38:17 np0005603787 nova_compute[238603]: 2026-01-31 10:38:17.102 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:38:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:38:19 np0005603787 nova_compute[238603]: 2026-01-31 10:38:19.098 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:38:19 np0005603787 nova_compute[238603]: 2026-01-31 10:38:19.101 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:38:19 np0005603787 nova_compute[238603]: 2026-01-31 10:38:19.101 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:38:19 np0005603787 nova_compute[238603]: 2026-01-31 10:38:19.101 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:38:19 np0005603787 nova_compute[238603]: 2026-01-31 10:38:19.121 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:38:19 np0005603787 nova_compute[238603]: 2026-01-31 10:38:19.122 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:38:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:38:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/641822439' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:38:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:38:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/641822439' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:38:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:38:24 np0005603787 ceph-osd[85879]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:38:24 np0005603787 ceph-osd[85879]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 6662 writes, 26K keys, 6662 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6662 writes, 1393 syncs, 4.78 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 270 writes, 495 keys, 270 commit groups, 1.0 writes per commit group, ingest: 0.21 MB, 0.00 MB/s#012Interval WAL: 270 writes, 135 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 05:38:25 np0005603787 nova_compute[238603]: 2026-01-31 10:38:25.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:38:25 np0005603787 nova_compute[238603]: 2026-01-31 10:38:25.134 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:38:25 np0005603787 nova_compute[238603]: 2026-01-31 10:38:25.134 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:38:25 np0005603787 nova_compute[238603]: 2026-01-31 10:38:25.134 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:38:25 np0005603787 nova_compute[238603]: 2026-01-31 10:38:25.135 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:38:25 np0005603787 nova_compute[238603]: 2026-01-31 10:38:25.135 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:38:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:38:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/777392009' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:38:25 np0005603787 nova_compute[238603]: 2026-01-31 10:38:25.625 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:38:25 np0005603787 nova_compute[238603]: 2026-01-31 10:38:25.782 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:38:25 np0005603787 nova_compute[238603]: 2026-01-31 10:38:25.783 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5114MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:38:25 np0005603787 nova_compute[238603]: 2026-01-31 10:38:25.784 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:38:25 np0005603787 nova_compute[238603]: 2026-01-31 10:38:25.784 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:38:25 np0005603787 nova_compute[238603]: 2026-01-31 10:38:25.866 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:38:25 np0005603787 nova_compute[238603]: 2026-01-31 10:38:25.866 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:38:25 np0005603787 nova_compute[238603]: 2026-01-31 10:38:25.890 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:38:26 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:38:26 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1638894228' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:38:26 np0005603787 nova_compute[238603]: 2026-01-31 10:38:26.433 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:38:26 np0005603787 nova_compute[238603]: 2026-01-31 10:38:26.438 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:38:26 np0005603787 nova_compute[238603]: 2026-01-31 10:38:26.469 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:38:26 np0005603787 nova_compute[238603]: 2026-01-31 10:38:26.471 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:38:26 np0005603787 nova_compute[238603]: 2026-01-31 10:38:26.471 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:38:26 np0005603787 podman[258173]: 2026-01-31 10:38:26.849171729 +0000 UTC m=+0.060653047 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 05:38:26 np0005603787 podman[258172]: 2026-01-31 10:38:26.871232661 +0000 UTC m=+0.086823391 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 05:38:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:38:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:29 np0005603787 nova_compute[238603]: 2026-01-31 10:38:29.472 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:38:30 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:38:30 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2402.2 total, 600.0 interval#012Cumulative writes: 7892 writes, 30K keys, 7892 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7892 writes, 1768 syncs, 4.46 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 228 writes, 359 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.14 MB, 0.00 MB/s#012Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 05:38:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:33 np0005603787 nova_compute[238603]: 2026-01-31 10:38:33.097 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:38:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:38:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:38:37.083 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:38:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:38:37.084 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:38:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:38:37.084 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:38:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:37 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:38:37 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.3 total, 600.0 interval#012Cumulative writes: 6562 writes, 26K keys, 6562 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6562 writes, 1322 syncs, 4.96 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 254 writes, 461 keys, 254 commit groups, 1.0 writes per commit group, ingest: 0.17 MB, 0.00 MB/s#012Interval WAL: 254 writes, 127 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 05:38:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:38:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:41 np0005603787 ceph-mgr[75453]: [devicehealth INFO root] Check health
Jan 31 05:38:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:38:43
Jan 31 05:38:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:38:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:38:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'images', 'default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta']
Jan 31 05:38:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:38:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:38:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:38:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:38:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:38:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:38:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:38:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:38:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:38:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:38:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:38:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:38:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:38:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:38:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:38:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:38:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:38:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:38:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:38:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:38:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:38:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:38:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:38:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:38:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:38:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:38:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:38:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:38:45 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:38:45 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:38:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:38:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:38:45 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:38:45 np0005603787 podman[258359]: 2026-01-31 10:38:45.679125224 +0000 UTC m=+0.043583451 container create 1853aa54f7c8d2ba1e980eceb84f999199747c03948ee237c5ff3e88bb0046a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_elbakyan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 05:38:45 np0005603787 systemd[1]: Started libpod-conmon-1853aa54f7c8d2ba1e980eceb84f999199747c03948ee237c5ff3e88bb0046a5.scope.
Jan 31 05:38:45 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:38:45 np0005603787 podman[258359]: 2026-01-31 10:38:45.744609291 +0000 UTC m=+0.109067538 container init 1853aa54f7c8d2ba1e980eceb84f999199747c03948ee237c5ff3e88bb0046a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_elbakyan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 05:38:45 np0005603787 podman[258359]: 2026-01-31 10:38:45.750137242 +0000 UTC m=+0.114595469 container start 1853aa54f7c8d2ba1e980eceb84f999199747c03948ee237c5ff3e88bb0046a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 05:38:45 np0005603787 podman[258359]: 2026-01-31 10:38:45.657707089 +0000 UTC m=+0.022165366 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:38:45 np0005603787 podman[258359]: 2026-01-31 10:38:45.75368644 +0000 UTC m=+0.118144677 container attach 1853aa54f7c8d2ba1e980eceb84f999199747c03948ee237c5ff3e88bb0046a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_elbakyan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:38:45 np0005603787 elastic_elbakyan[258375]: 167 167
Jan 31 05:38:45 np0005603787 systemd[1]: libpod-1853aa54f7c8d2ba1e980eceb84f999199747c03948ee237c5ff3e88bb0046a5.scope: Deactivated successfully.
Jan 31 05:38:45 np0005603787 conmon[258375]: conmon 1853aa54f7c8d2ba1e98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1853aa54f7c8d2ba1e980eceb84f999199747c03948ee237c5ff3e88bb0046a5.scope/container/memory.events
Jan 31 05:38:45 np0005603787 podman[258359]: 2026-01-31 10:38:45.757052421 +0000 UTC m=+0.121510658 container died 1853aa54f7c8d2ba1e980eceb84f999199747c03948ee237c5ff3e88bb0046a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 05:38:45 np0005603787 systemd[1]: var-lib-containers-storage-overlay-cc0583c4cb005fabbc375168a85d1103ad0ff9d6a97a46739867f6f22f922964-merged.mount: Deactivated successfully.
Jan 31 05:38:45 np0005603787 podman[258359]: 2026-01-31 10:38:45.792052917 +0000 UTC m=+0.156511184 container remove 1853aa54f7c8d2ba1e980eceb84f999199747c03948ee237c5ff3e88bb0046a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_elbakyan, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:38:45 np0005603787 systemd[1]: libpod-conmon-1853aa54f7c8d2ba1e980eceb84f999199747c03948ee237c5ff3e88bb0046a5.scope: Deactivated successfully.
Jan 31 05:38:45 np0005603787 podman[258399]: 2026-01-31 10:38:45.937966702 +0000 UTC m=+0.054332465 container create 6ec32f05cca0aaf50d5acc6cb5431ccf2a0af485f5c89c2091a2c9c80460096b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 05:38:45 np0005603787 systemd[1]: Started libpod-conmon-6ec32f05cca0aaf50d5acc6cb5431ccf2a0af485f5c89c2091a2c9c80460096b.scope.
Jan 31 05:38:46 np0005603787 podman[258399]: 2026-01-31 10:38:45.911973921 +0000 UTC m=+0.028339704 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:38:46 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:38:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e312f322dca405fb20b065bf515ad8212328f41d17a1962fdaf8b5c8815661f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:38:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e312f322dca405fb20b065bf515ad8212328f41d17a1962fdaf8b5c8815661f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:38:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e312f322dca405fb20b065bf515ad8212328f41d17a1962fdaf8b5c8815661f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:38:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e312f322dca405fb20b065bf515ad8212328f41d17a1962fdaf8b5c8815661f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:38:46 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e312f322dca405fb20b065bf515ad8212328f41d17a1962fdaf8b5c8815661f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:38:46 np0005603787 podman[258399]: 2026-01-31 10:38:46.022655633 +0000 UTC m=+0.139021366 container init 6ec32f05cca0aaf50d5acc6cb5431ccf2a0af485f5c89c2091a2c9c80460096b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 05:38:46 np0005603787 podman[258399]: 2026-01-31 10:38:46.029596243 +0000 UTC m=+0.145961966 container start 6ec32f05cca0aaf50d5acc6cb5431ccf2a0af485f5c89c2091a2c9c80460096b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_morse, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:38:46 np0005603787 podman[258399]: 2026-01-31 10:38:46.032935344 +0000 UTC m=+0.149301067 container attach 6ec32f05cca0aaf50d5acc6cb5431ccf2a0af485f5c89c2091a2c9c80460096b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:38:46 np0005603787 jolly_morse[258416]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:38:46 np0005603787 jolly_morse[258416]: --> All data devices are unavailable
Jan 31 05:38:46 np0005603787 systemd[1]: libpod-6ec32f05cca0aaf50d5acc6cb5431ccf2a0af485f5c89c2091a2c9c80460096b.scope: Deactivated successfully.
Jan 31 05:38:46 np0005603787 podman[258399]: 2026-01-31 10:38:46.434532059 +0000 UTC m=+0.550897812 container died 6ec32f05cca0aaf50d5acc6cb5431ccf2a0af485f5c89c2091a2c9c80460096b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:38:46 np0005603787 systemd[1]: var-lib-containers-storage-overlay-4e312f322dca405fb20b065bf515ad8212328f41d17a1962fdaf8b5c8815661f-merged.mount: Deactivated successfully.
Jan 31 05:38:46 np0005603787 podman[258399]: 2026-01-31 10:38:46.479490926 +0000 UTC m=+0.595856649 container remove 6ec32f05cca0aaf50d5acc6cb5431ccf2a0af485f5c89c2091a2c9c80460096b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:38:46 np0005603787 systemd[1]: libpod-conmon-6ec32f05cca0aaf50d5acc6cb5431ccf2a0af485f5c89c2091a2c9c80460096b.scope: Deactivated successfully.
Jan 31 05:38:46 np0005603787 podman[258510]: 2026-01-31 10:38:46.936117943 +0000 UTC m=+0.041344979 container create d2beb783279f374f1ad071b87db2282b567289e3a179d355f80d78744c1b54be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_easley, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 05:38:46 np0005603787 systemd[1]: Started libpod-conmon-d2beb783279f374f1ad071b87db2282b567289e3a179d355f80d78744c1b54be.scope.
Jan 31 05:38:47 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:38:47 np0005603787 podman[258510]: 2026-01-31 10:38:46.924418524 +0000 UTC m=+0.029645580 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:38:47 np0005603787 podman[258510]: 2026-01-31 10:38:47.022476971 +0000 UTC m=+0.127704037 container init d2beb783279f374f1ad071b87db2282b567289e3a179d355f80d78744c1b54be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_easley, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 05:38:47 np0005603787 podman[258510]: 2026-01-31 10:38:47.027791647 +0000 UTC m=+0.133018683 container start d2beb783279f374f1ad071b87db2282b567289e3a179d355f80d78744c1b54be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:38:47 np0005603787 podman[258510]: 2026-01-31 10:38:47.030499941 +0000 UTC m=+0.135726977 container attach d2beb783279f374f1ad071b87db2282b567289e3a179d355f80d78744c1b54be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 05:38:47 np0005603787 quizzical_easley[258526]: 167 167
Jan 31 05:38:47 np0005603787 systemd[1]: libpod-d2beb783279f374f1ad071b87db2282b567289e3a179d355f80d78744c1b54be.scope: Deactivated successfully.
Jan 31 05:38:47 np0005603787 podman[258510]: 2026-01-31 10:38:47.032233388 +0000 UTC m=+0.137460414 container died d2beb783279f374f1ad071b87db2282b567289e3a179d355f80d78744c1b54be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_easley, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:38:47 np0005603787 systemd[1]: var-lib-containers-storage-overlay-427ae5a62188e49cc4b9eba9536c1b6003f8a854e8ed88f4107f5f18af209d10-merged.mount: Deactivated successfully.
Jan 31 05:38:47 np0005603787 podman[258510]: 2026-01-31 10:38:47.073240247 +0000 UTC m=+0.178467313 container remove d2beb783279f374f1ad071b87db2282b567289e3a179d355f80d78744c1b54be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 05:38:47 np0005603787 systemd[1]: libpod-conmon-d2beb783279f374f1ad071b87db2282b567289e3a179d355f80d78744c1b54be.scope: Deactivated successfully.
Jan 31 05:38:47 np0005603787 podman[258550]: 2026-01-31 10:38:47.22348953 +0000 UTC m=+0.046678186 container create 93e8ed36b50f24ea96a44ca432082072dc8041b40a6fe16b6327ebe2d8eb8144 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 05:38:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:38:47 np0005603787 systemd[1]: Started libpod-conmon-93e8ed36b50f24ea96a44ca432082072dc8041b40a6fe16b6327ebe2d8eb8144.scope.
Jan 31 05:38:47 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:38:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05d89cfbdb3e8eab45c5fd53450cf6ee0422c82df4c937caf30e2f87ecf5d1a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:38:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05d89cfbdb3e8eab45c5fd53450cf6ee0422c82df4c937caf30e2f87ecf5d1a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:38:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05d89cfbdb3e8eab45c5fd53450cf6ee0422c82df4c937caf30e2f87ecf5d1a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:38:47 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05d89cfbdb3e8eab45c5fd53450cf6ee0422c82df4c937caf30e2f87ecf5d1a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:38:47 np0005603787 podman[258550]: 2026-01-31 10:38:47.203401951 +0000 UTC m=+0.026590677 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:38:47 np0005603787 podman[258550]: 2026-01-31 10:38:47.309171269 +0000 UTC m=+0.132359935 container init 93e8ed36b50f24ea96a44ca432082072dc8041b40a6fe16b6327ebe2d8eb8144 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bouman, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:38:47 np0005603787 podman[258550]: 2026-01-31 10:38:47.31652233 +0000 UTC m=+0.139710976 container start 93e8ed36b50f24ea96a44ca432082072dc8041b40a6fe16b6327ebe2d8eb8144 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:38:47 np0005603787 podman[258550]: 2026-01-31 10:38:47.320255872 +0000 UTC m=+0.143444538 container attach 93e8ed36b50f24ea96a44ca432082072dc8041b40a6fe16b6327ebe2d8eb8144 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 05:38:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 31 05:38:47 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 31 05:38:47 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]: {
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:    "0": [
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:        {
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "devices": [
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "/dev/loop3"
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            ],
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "lv_name": "ceph_lv0",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "lv_size": "21470642176",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "name": "ceph_lv0",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "tags": {
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.cluster_name": "ceph",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.crush_device_class": "",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.encrypted": "0",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.objectstore": "bluestore",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.osd_id": "0",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.type": "block",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.vdo": "0",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.with_tpm": "0"
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            },
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "type": "block",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "vg_name": "ceph_vg0"
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:        }
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:    ],
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:    "1": [
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:        {
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "devices": [
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "/dev/loop4"
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            ],
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "lv_name": "ceph_lv1",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "lv_size": "21470642176",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "name": "ceph_lv1",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "tags": {
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.cluster_name": "ceph",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.crush_device_class": "",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.encrypted": "0",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.objectstore": "bluestore",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.osd_id": "1",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.type": "block",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.vdo": "0",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.with_tpm": "0"
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            },
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "type": "block",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "vg_name": "ceph_vg1"
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:        }
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:    ],
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:    "2": [
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:        {
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "devices": [
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "/dev/loop5"
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            ],
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "lv_name": "ceph_lv2",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "lv_size": "21470642176",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "name": "ceph_lv2",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "tags": {
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.cluster_name": "ceph",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.crush_device_class": "",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.encrypted": "0",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.objectstore": "bluestore",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.osd_id": "2",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.type": "block",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.vdo": "0",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:                "ceph.with_tpm": "0"
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            },
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "type": "block",
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:            "vg_name": "ceph_vg2"
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:        }
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]:    ]
Jan 31 05:38:47 np0005603787 elastic_bouman[258567]: }
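[editor's note] The JSON block printed above by the elastic_bouman container is the per-OSD listing that `ceph-volume lvm list --format json` emits: a map keyed by OSD id ("0", "1", "2"), each entry carrying the backing devices, the LV path, and the ceph.* tags. The sketch below is not part of the log; it only illustrates, under the assumption that this output has been captured to a hypothetical file lvm_list.json, how the dump could be reduced to an OSD-id -> FSID/LV/device summary.

    # Minimal sketch (assumes the JSON above was saved to "lvm_list.json";
    # that path does not appear anywhere in this log).
    import json

    with open("lvm_list.json") as f:
        osds = json.load(f)          # {"0": [ {...} ], "1": [ {...} ], "2": [ {...} ]}

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(osd_id,
                  tags.get("ceph.osd_fsid", "?"),    # e.g. 4a39e342-98b4-...
                  lv.get("lv_path", "?"),            # e.g. /dev/ceph_vg0/ceph_lv0
                  ",".join(lv.get("devices", [])))   # e.g. /dev/loop3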
Jan 31 05:38:47 np0005603787 systemd[1]: libpod-93e8ed36b50f24ea96a44ca432082072dc8041b40a6fe16b6327ebe2d8eb8144.scope: Deactivated successfully.
Jan 31 05:38:47 np0005603787 podman[258550]: 2026-01-31 10:38:47.623043269 +0000 UTC m=+0.446231915 container died 93e8ed36b50f24ea96a44ca432082072dc8041b40a6fe16b6327ebe2d8eb8144 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bouman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 05:38:47 np0005603787 systemd[1]: var-lib-containers-storage-overlay-05d89cfbdb3e8eab45c5fd53450cf6ee0422c82df4c937caf30e2f87ecf5d1a4-merged.mount: Deactivated successfully.
Jan 31 05:38:47 np0005603787 podman[258550]: 2026-01-31 10:38:47.679721667 +0000 UTC m=+0.502910313 container remove 93e8ed36b50f24ea96a44ca432082072dc8041b40a6fe16b6327ebe2d8eb8144 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:38:47 np0005603787 systemd[1]: libpod-conmon-93e8ed36b50f24ea96a44ca432082072dc8041b40a6fe16b6327ebe2d8eb8144.scope: Deactivated successfully.
Jan 31 05:38:48 np0005603787 podman[258651]: 2026-01-31 10:38:48.10287678 +0000 UTC m=+0.038616666 container create bf8abdd29a95cc411af33eb9ebc1636559e39cd49dbec87b7a0b6dc3b9b1d152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_colden, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:38:48 np0005603787 systemd[1]: Started libpod-conmon-bf8abdd29a95cc411af33eb9ebc1636559e39cd49dbec87b7a0b6dc3b9b1d152.scope.
Jan 31 05:38:48 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:38:48 np0005603787 podman[258651]: 2026-01-31 10:38:48.175260416 +0000 UTC m=+0.111000292 container init bf8abdd29a95cc411af33eb9ebc1636559e39cd49dbec87b7a0b6dc3b9b1d152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_colden, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:38:48 np0005603787 podman[258651]: 2026-01-31 10:38:48.083899821 +0000 UTC m=+0.019639707 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:38:48 np0005603787 podman[258651]: 2026-01-31 10:38:48.180763096 +0000 UTC m=+0.116502952 container start bf8abdd29a95cc411af33eb9ebc1636559e39cd49dbec87b7a0b6dc3b9b1d152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_colden, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:38:48 np0005603787 podman[258651]: 2026-01-31 10:38:48.183934663 +0000 UTC m=+0.119674549 container attach bf8abdd29a95cc411af33eb9ebc1636559e39cd49dbec87b7a0b6dc3b9b1d152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_colden, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 05:38:48 np0005603787 gracious_colden[258668]: 167 167
Jan 31 05:38:48 np0005603787 systemd[1]: libpod-bf8abdd29a95cc411af33eb9ebc1636559e39cd49dbec87b7a0b6dc3b9b1d152.scope: Deactivated successfully.
Jan 31 05:38:48 np0005603787 podman[258651]: 2026-01-31 10:38:48.187198112 +0000 UTC m=+0.122937978 container died bf8abdd29a95cc411af33eb9ebc1636559e39cd49dbec87b7a0b6dc3b9b1d152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 05:38:48 np0005603787 systemd[1]: var-lib-containers-storage-overlay-394d909e9e6e750723882a9316f9c2e3f704c5954d578248dc605d29c2fc237c-merged.mount: Deactivated successfully.
Jan 31 05:38:48 np0005603787 podman[258651]: 2026-01-31 10:38:48.220796609 +0000 UTC m=+0.156536475 container remove bf8abdd29a95cc411af33eb9ebc1636559e39cd49dbec87b7a0b6dc3b9b1d152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:38:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:38:48 np0005603787 systemd[1]: libpod-conmon-bf8abdd29a95cc411af33eb9ebc1636559e39cd49dbec87b7a0b6dc3b9b1d152.scope: Deactivated successfully.
Jan 31 05:38:48 np0005603787 podman[258691]: 2026-01-31 10:38:48.357494712 +0000 UTC m=+0.039771558 container create 2de982a207a6f3419ec8c44a4157b45bf7cc8723abcb3f81af9af836200bdf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lehmann, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:38:48 np0005603787 systemd[1]: Started libpod-conmon-2de982a207a6f3419ec8c44a4157b45bf7cc8723abcb3f81af9af836200bdf62.scope.
Jan 31 05:38:48 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:38:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ec62ae6a576789d8900d3a087ca1c9ddada6ae6771ddfc40827ccc93d934821/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:38:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ec62ae6a576789d8900d3a087ca1c9ddada6ae6771ddfc40827ccc93d934821/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:38:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ec62ae6a576789d8900d3a087ca1c9ddada6ae6771ddfc40827ccc93d934821/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:38:48 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ec62ae6a576789d8900d3a087ca1c9ddada6ae6771ddfc40827ccc93d934821/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:38:48 np0005603787 podman[258691]: 2026-01-31 10:38:48.336296972 +0000 UTC m=+0.018573868 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:38:48 np0005603787 podman[258691]: 2026-01-31 10:38:48.443072428 +0000 UTC m=+0.125349274 container init 2de982a207a6f3419ec8c44a4157b45bf7cc8723abcb3f81af9af836200bdf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:38:48 np0005603787 podman[258691]: 2026-01-31 10:38:48.451883799 +0000 UTC m=+0.134160645 container start 2de982a207a6f3419ec8c44a4157b45bf7cc8723abcb3f81af9af836200bdf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lehmann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:38:48 np0005603787 podman[258691]: 2026-01-31 10:38:48.454809208 +0000 UTC m=+0.137086074 container attach 2de982a207a6f3419ec8c44a4157b45bf7cc8723abcb3f81af9af836200bdf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lehmann, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:38:49 np0005603787 lvm[258783]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:38:49 np0005603787 lvm[258783]: VG ceph_vg0 finished
Jan 31 05:38:49 np0005603787 lvm[258786]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:38:49 np0005603787 lvm[258786]: VG ceph_vg1 finished
Jan 31 05:38:49 np0005603787 lvm[258788]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:38:49 np0005603787 lvm[258788]: VG ceph_vg2 finished
Jan 31 05:38:49 np0005603787 funny_lehmann[258707]: {}
Jan 31 05:38:49 np0005603787 systemd[1]: libpod-2de982a207a6f3419ec8c44a4157b45bf7cc8723abcb3f81af9af836200bdf62.scope: Deactivated successfully.
Jan 31 05:38:49 np0005603787 podman[258691]: 2026-01-31 10:38:49.165151773 +0000 UTC m=+0.847428619 container died 2de982a207a6f3419ec8c44a4157b45bf7cc8723abcb3f81af9af836200bdf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 05:38:49 np0005603787 systemd[1]: libpod-2de982a207a6f3419ec8c44a4157b45bf7cc8723abcb3f81af9af836200bdf62.scope: Consumed 1.035s CPU time.
Jan 31 05:38:49 np0005603787 systemd[1]: var-lib-containers-storage-overlay-9ec62ae6a576789d8900d3a087ca1c9ddada6ae6771ddfc40827ccc93d934821-merged.mount: Deactivated successfully.
Jan 31 05:38:49 np0005603787 podman[258691]: 2026-01-31 10:38:49.203679005 +0000 UTC m=+0.885955851 container remove 2de982a207a6f3419ec8c44a4157b45bf7cc8723abcb3f81af9af836200bdf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lehmann, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:38:49 np0005603787 systemd[1]: libpod-conmon-2de982a207a6f3419ec8c44a4157b45bf7cc8723abcb3f81af9af836200bdf62.scope: Deactivated successfully.
Jan 31 05:38:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 461 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 31 05:38:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:38:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:38:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:38:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:38:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:38:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:38:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 461 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 31 05:38:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:38:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 31 05:38:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 31 05:38:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 05:38:53 np0005603787 ceph-mon[75160]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.859659023922061e-07 of space, bias 1.0, pg target 0.00014578977071766182 quantized to 32 (current 32)
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4926273002905205e-06 of space, bias 4.0, pg target 0.0017911527603486246 quantized to 16 (current 16)
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:38:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
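[editor's note] The pg_autoscaler figures above fit a simple relationship: each pool's "pg target" is its capacity ratio times its bias times the cluster-wide PG budget. The check below is not from the log; it assumes the default mon_target_pg_per_osd of 100 and the 3 OSDs reported in the osdmap lines (e141: 3 total, 3 up, 3 in), giving a budget of 300, and it reproduces the logged targets for '.mgr', 'images' and 'cephfs.cephfs.meta' to within floating-point rounding. The target is then quantized (e.g. to 1, 16 or 32 above) before any pg_num change is considered.

    # Minimal check: pg target = capacity_ratio * bias * (mon_target_pg_per_osd * num_osds).
    # 100 is the assumed default mon_target_pg_per_osd; 3 is the OSD count from the osdmap lines.
    pg_budget = 100 * 3

    for pool, ratio, bias in [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("images",             4.859659023922061e-07, 1.0),
        ("cephfs.cephfs.meta", 1.4926273002905205e-06, 4.0),
    ]:
        print(pool, ratio * bias * pg_budget)
    # -> approximately 0.0021557, 0.00014579, 0.0017912, matching the logged pg targets.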
Jan 31 05:38:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 31 05:38:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 05:38:57 np0005603787 podman[258829]: 2026-01-31 10:38:57.8368985 +0000 UTC m=+0.052852924 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 05:38:57 np0005603787 podman[258828]: 2026-01-31 10:38:57.868032239 +0000 UTC m=+0.081081974 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 05:38:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:38:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 0 B/s wr, 1 op/s
Jan 31 05:39:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 B/s wr, 0 op/s
Jan 31 05:39:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:39:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:39:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:39:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:39:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:39:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:39:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:39:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:39:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:39:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:16 np0005603787 nova_compute[238603]: 2026-01-31 10:39:16.101 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:39:16 np0005603787 nova_compute[238603]: 2026-01-31 10:39:16.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:39:17 np0005603787 nova_compute[238603]: 2026-01-31 10:39:17.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:39:17 np0005603787 nova_compute[238603]: 2026-01-31 10:39:17.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 05:39:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:18 np0005603787 nova_compute[238603]: 2026-01-31 10:39:18.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:39:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:39:19 np0005603787 nova_compute[238603]: 2026-01-31 10:39:19.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:39:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:20 np0005603787 nova_compute[238603]: 2026-01-31 10:39:20.109 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:39:20 np0005603787 nova_compute[238603]: 2026-01-31 10:39:20.110 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:39:21 np0005603787 nova_compute[238603]: 2026-01-31 10:39:21.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:39:21 np0005603787 nova_compute[238603]: 2026-01-31 10:39:21.102 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 05:39:21 np0005603787 nova_compute[238603]: 2026-01-31 10:39:21.102 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 05:39:21 np0005603787 nova_compute[238603]: 2026-01-31 10:39:21.124 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 05:39:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:39:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4221405674' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:39:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:39:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4221405674' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:39:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:39:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:25 np0005603787 nova_compute[238603]: 2026-01-31 10:39:25.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:39:25 np0005603787 nova_compute[238603]: 2026-01-31 10:39:25.130 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:39:25 np0005603787 nova_compute[238603]: 2026-01-31 10:39:25.131 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:39:25 np0005603787 nova_compute[238603]: 2026-01-31 10:39:25.131 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:39:25 np0005603787 nova_compute[238603]: 2026-01-31 10:39:25.131 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:39:25 np0005603787 nova_compute[238603]: 2026-01-31 10:39:25.131 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:39:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:39:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/878610769' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:39:25 np0005603787 nova_compute[238603]: 2026-01-31 10:39:25.656 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:39:25 np0005603787 nova_compute[238603]: 2026-01-31 10:39:25.774 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:39:25 np0005603787 nova_compute[238603]: 2026-01-31 10:39:25.775 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5129MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:39:25 np0005603787 nova_compute[238603]: 2026-01-31 10:39:25.775 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:39:25 np0005603787 nova_compute[238603]: 2026-01-31 10:39:25.776 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:39:26 np0005603787 nova_compute[238603]: 2026-01-31 10:39:26.133 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:39:26 np0005603787 nova_compute[238603]: 2026-01-31 10:39:26.133 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:39:26 np0005603787 nova_compute[238603]: 2026-01-31 10:39:26.199 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Refreshing inventories for resource provider 207962d2-1ba9-4db2-8533-2a30e7131f3e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 05:39:26 np0005603787 nova_compute[238603]: 2026-01-31 10:39:26.301 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Updating ProviderTree inventory for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 05:39:26 np0005603787 nova_compute[238603]: 2026-01-31 10:39:26.302 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Updating inventory in ProviderTree for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 05:39:26 np0005603787 nova_compute[238603]: 2026-01-31 10:39:26.318 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Refreshing aggregate associations for resource provider 207962d2-1ba9-4db2-8533-2a30e7131f3e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 05:39:26 np0005603787 nova_compute[238603]: 2026-01-31 10:39:26.343 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Refreshing trait associations for resource provider 207962d2-1ba9-4db2-8533-2a30e7131f3e, traits: COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SVM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE41,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 05:39:26 np0005603787 nova_compute[238603]: 2026-01-31 10:39:26.363 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:39:26 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:39:26 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/653664740' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:39:26 np0005603787 nova_compute[238603]: 2026-01-31 10:39:26.829 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:39:26 np0005603787 nova_compute[238603]: 2026-01-31 10:39:26.836 238607 DEBUG nova.compute.provider_tree [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed in ProviderTree for provider: 207962d2-1ba9-4db2-8533-2a30e7131f3e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 05:39:26 np0005603787 nova_compute[238603]: 2026-01-31 10:39:26.928 238607 DEBUG nova.scheduler.client.report [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Inventory has not changed for provider 207962d2-1ba9-4db2-8533-2a30e7131f3e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 05:39:26 np0005603787 nova_compute[238603]: 2026-01-31 10:39:26.930 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 05:39:26 np0005603787 nova_compute[238603]: 2026-01-31 10:39:26.930 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:39:26 np0005603787 nova_compute[238603]: 2026-01-31 10:39:26.930 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:39:26 np0005603787 nova_compute[238603]: 2026-01-31 10:39:26.931 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 05:39:26 np0005603787 nova_compute[238603]: 2026-01-31 10:39:26.954 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 05:39:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:39:28 np0005603787 podman[258920]: 2026-01-31 10:39:28.827736175 +0000 UTC m=+0.048412313 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 05:39:28 np0005603787 podman[258919]: 2026-01-31 10:39:28.848883812 +0000 UTC m=+0.074060693 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:39:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:29 np0005603787 nova_compute[238603]: 2026-01-31 10:39:29.954 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:39:31 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 05:39:33 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:39:33 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 27 op/s
Jan 31 05:39:35 np0005603787 nova_compute[238603]: 2026-01-31 10:39:35.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:39:35 np0005603787 nova_compute[238603]: 2026-01-31 10:39:35.102 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 05:39:35 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Jan 31 05:39:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:39:37.084 154765 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:39:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:39:37.085 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:39:37 np0005603787 ovn_metadata_agent[154760]: 2026-01-31 10:39:37.085 154765 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:39:37 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 05:39:38 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:39:39 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 05:39:41 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 05:39:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Optimize plan auto_2026-01-31_10:39:43
Jan 31 05:39:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:39:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] do_upmap
Jan 31 05:39:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', '.rgw.root', 'backups', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'images', 'cephfs.cephfs.meta']
Jan 31 05:39:43 np0005603787 ceph-mgr[75453]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:39:43 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:39:43 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 54 op/s
Jan 31 05:39:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:39:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:39:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:39:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:39:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:39:43 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:39:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:39:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:39:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:39:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:39:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:39:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:39:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:39:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:39:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:39:44 np0005603787 ceph-mgr[75453]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:39:45 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 32 op/s
Jan 31 05:39:47 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
Jan 31 05:39:48 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:39:49 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:39:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:39:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:39:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:39:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:39:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:39:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:39:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:39:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:39:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:39:49 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:39:49 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:39:50 np0005603787 podman[259108]: 2026-01-31 10:39:50.270680175 +0000 UTC m=+0.018231439 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:39:50 np0005603787 podman[259108]: 2026-01-31 10:39:50.437343375 +0000 UTC m=+0.184894629 container create 05cf8e002fdae028da72fc36059f82746e6d98994784907bd675352ef41d5442 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:39:50 np0005603787 systemd[1]: Started libpod-conmon-05cf8e002fdae028da72fc36059f82746e6d98994784907bd675352ef41d5442.scope.
Jan 31 05:39:50 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:39:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:39:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:39:50 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:39:50 np0005603787 podman[259108]: 2026-01-31 10:39:50.701140368 +0000 UTC m=+0.448691642 container init 05cf8e002fdae028da72fc36059f82746e6d98994784907bd675352ef41d5442 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_volhard, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:39:50 np0005603787 podman[259108]: 2026-01-31 10:39:50.710161944 +0000 UTC m=+0.457713238 container start 05cf8e002fdae028da72fc36059f82746e6d98994784907bd675352ef41d5442 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_volhard, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:39:50 np0005603787 serene_volhard[259124]: 167 167
Jan 31 05:39:50 np0005603787 systemd[1]: libpod-05cf8e002fdae028da72fc36059f82746e6d98994784907bd675352ef41d5442.scope: Deactivated successfully.
Jan 31 05:39:50 np0005603787 podman[259108]: 2026-01-31 10:39:50.718978755 +0000 UTC m=+0.466530019 container attach 05cf8e002fdae028da72fc36059f82746e6d98994784907bd675352ef41d5442 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 05:39:50 np0005603787 podman[259108]: 2026-01-31 10:39:50.719310124 +0000 UTC m=+0.466861378 container died 05cf8e002fdae028da72fc36059f82746e6d98994784907bd675352ef41d5442 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_volhard, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:39:50 np0005603787 systemd[1]: var-lib-containers-storage-overlay-ab565a61d52f857247131473d79e3f8bc5abc56d54e2ce247a0c1b90b78c22b5-merged.mount: Deactivated successfully.
Jan 31 05:39:50 np0005603787 podman[259108]: 2026-01-31 10:39:50.794443825 +0000 UTC m=+0.541995089 container remove 05cf8e002fdae028da72fc36059f82746e6d98994784907bd675352ef41d5442 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_volhard, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:39:50 np0005603787 systemd[1]: libpod-conmon-05cf8e002fdae028da72fc36059f82746e6d98994784907bd675352ef41d5442.scope: Deactivated successfully.
Jan 31 05:39:50 np0005603787 podman[259148]: 2026-01-31 10:39:50.954607068 +0000 UTC m=+0.053136212 container create 892ed934a22f9ff581106773ef0d72e1c1af901bc75afedc9548e085157552af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_torvalds, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 05:39:50 np0005603787 systemd[1]: Started libpod-conmon-892ed934a22f9ff581106773ef0d72e1c1af901bc75afedc9548e085157552af.scope.
Jan 31 05:39:51 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:39:51 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47be29f50484ddf8ecb9bdff6550dc92b750422b4059c02a99ddfb3ac6590af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:39:51 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47be29f50484ddf8ecb9bdff6550dc92b750422b4059c02a99ddfb3ac6590af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:39:51 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47be29f50484ddf8ecb9bdff6550dc92b750422b4059c02a99ddfb3ac6590af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:39:51 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47be29f50484ddf8ecb9bdff6550dc92b750422b4059c02a99ddfb3ac6590af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:39:51 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47be29f50484ddf8ecb9bdff6550dc92b750422b4059c02a99ddfb3ac6590af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:39:51 np0005603787 podman[259148]: 2026-01-31 10:39:50.93382047 +0000 UTC m=+0.032349634 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:39:51 np0005603787 podman[259148]: 2026-01-31 10:39:51.043557097 +0000 UTC m=+0.142086241 container init 892ed934a22f9ff581106773ef0d72e1c1af901bc75afedc9548e085157552af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 05:39:51 np0005603787 podman[259148]: 2026-01-31 10:39:51.050301161 +0000 UTC m=+0.148830305 container start 892ed934a22f9ff581106773ef0d72e1c1af901bc75afedc9548e085157552af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:39:51 np0005603787 podman[259148]: 2026-01-31 10:39:51.056242383 +0000 UTC m=+0.154771567 container attach 892ed934a22f9ff581106773ef0d72e1c1af901bc75afedc9548e085157552af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:39:51 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:51 np0005603787 wonderful_torvalds[259165]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:39:51 np0005603787 wonderful_torvalds[259165]: --> All data devices are unavailable
Jan 31 05:39:51 np0005603787 systemd[1]: libpod-892ed934a22f9ff581106773ef0d72e1c1af901bc75afedc9548e085157552af.scope: Deactivated successfully.
Jan 31 05:39:51 np0005603787 podman[259148]: 2026-01-31 10:39:51.466749261 +0000 UTC m=+0.565278435 container died 892ed934a22f9ff581106773ef0d72e1c1af901bc75afedc9548e085157552af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_torvalds, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 05:39:51 np0005603787 systemd[1]: var-lib-containers-storage-overlay-d47be29f50484ddf8ecb9bdff6550dc92b750422b4059c02a99ddfb3ac6590af-merged.mount: Deactivated successfully.
Jan 31 05:39:51 np0005603787 podman[259148]: 2026-01-31 10:39:51.505998552 +0000 UTC m=+0.604527706 container remove 892ed934a22f9ff581106773ef0d72e1c1af901bc75afedc9548e085157552af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 05:39:51 np0005603787 systemd[1]: libpod-conmon-892ed934a22f9ff581106773ef0d72e1c1af901bc75afedc9548e085157552af.scope: Deactivated successfully.
Jan 31 05:39:51 np0005603787 podman[259258]: 2026-01-31 10:39:51.94502877 +0000 UTC m=+0.046105790 container create 9e4d74768967410fe25a325e400140db33339681b080d9137fd5d1603be14c70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 05:39:51 np0005603787 systemd[1]: Started libpod-conmon-9e4d74768967410fe25a325e400140db33339681b080d9137fd5d1603be14c70.scope.
Jan 31 05:39:52 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:39:52 np0005603787 podman[259258]: 2026-01-31 10:39:52.018735811 +0000 UTC m=+0.119812841 container init 9e4d74768967410fe25a325e400140db33339681b080d9137fd5d1603be14c70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hoover, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 05:39:52 np0005603787 podman[259258]: 2026-01-31 10:39:51.926235276 +0000 UTC m=+0.027312326 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:39:52 np0005603787 podman[259258]: 2026-01-31 10:39:52.023122661 +0000 UTC m=+0.124199681 container start 9e4d74768967410fe25a325e400140db33339681b080d9137fd5d1603be14c70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hoover, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 05:39:52 np0005603787 reverent_hoover[259275]: 167 167
Jan 31 05:39:52 np0005603787 systemd[1]: libpod-9e4d74768967410fe25a325e400140db33339681b080d9137fd5d1603be14c70.scope: Deactivated successfully.
Jan 31 05:39:52 np0005603787 podman[259258]: 2026-01-31 10:39:52.029438714 +0000 UTC m=+0.130515734 container attach 9e4d74768967410fe25a325e400140db33339681b080d9137fd5d1603be14c70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hoover, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 05:39:52 np0005603787 podman[259258]: 2026-01-31 10:39:52.029796654 +0000 UTC m=+0.130873674 container died 9e4d74768967410fe25a325e400140db33339681b080d9137fd5d1603be14c70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hoover, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:39:52 np0005603787 systemd[1]: var-lib-containers-storage-overlay-14c946ba3f7830096693286f1198de98e7da27d83f5ac036ed1b902b35424993-merged.mount: Deactivated successfully.
Jan 31 05:39:52 np0005603787 podman[259258]: 2026-01-31 10:39:52.093207235 +0000 UTC m=+0.194284275 container remove 9e4d74768967410fe25a325e400140db33339681b080d9137fd5d1603be14c70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 05:39:52 np0005603787 systemd[1]: libpod-conmon-9e4d74768967410fe25a325e400140db33339681b080d9137fd5d1603be14c70.scope: Deactivated successfully.
Jan 31 05:39:52 np0005603787 podman[259301]: 2026-01-31 10:39:52.237394301 +0000 UTC m=+0.046331856 container create dea5d83f54ddd1904d89d6ace9970033858b9a1c38af989808c15e34bb12625d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:39:52 np0005603787 systemd[1]: Started libpod-conmon-dea5d83f54ddd1904d89d6ace9970033858b9a1c38af989808c15e34bb12625d.scope.
Jan 31 05:39:52 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:39:52 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e993726f5db511a9cc48ddf6c5df50e55c87cd93740097d50b1862bcab64d85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:39:52 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e993726f5db511a9cc48ddf6c5df50e55c87cd93740097d50b1862bcab64d85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:39:52 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e993726f5db511a9cc48ddf6c5df50e55c87cd93740097d50b1862bcab64d85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:39:52 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e993726f5db511a9cc48ddf6c5df50e55c87cd93740097d50b1862bcab64d85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:39:52 np0005603787 podman[259301]: 2026-01-31 10:39:52.216836071 +0000 UTC m=+0.025773636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:39:52 np0005603787 podman[259301]: 2026-01-31 10:39:52.322445464 +0000 UTC m=+0.131383099 container init dea5d83f54ddd1904d89d6ace9970033858b9a1c38af989808c15e34bb12625d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 05:39:52 np0005603787 podman[259301]: 2026-01-31 10:39:52.331911383 +0000 UTC m=+0.140848938 container start dea5d83f54ddd1904d89d6ace9970033858b9a1c38af989808c15e34bb12625d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 05:39:52 np0005603787 podman[259301]: 2026-01-31 10:39:52.336140068 +0000 UTC m=+0.145077663 container attach dea5d83f54ddd1904d89d6ace9970033858b9a1c38af989808c15e34bb12625d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shamir, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]: {
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:    "0": [
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:        {
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "devices": [
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "/dev/loop3"
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            ],
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "lv_name": "ceph_lv0",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "lv_size": "21470642176",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4a39e342-98b4-4260-a68a-c160a0fcb60c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "lv_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "name": "ceph_lv0",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "tags": {
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.block_uuid": "ud1dyv-aNXj-CiLR-WYhr-bB8n-1VKb-czg4ZK",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.cluster_name": "ceph",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.crush_device_class": "",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.encrypted": "0",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.objectstore": "bluestore",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.osd_fsid": "4a39e342-98b4-4260-a68a-c160a0fcb60c",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.osd_id": "0",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.type": "block",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.vdo": "0",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.with_tpm": "0"
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            },
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "type": "block",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "vg_name": "ceph_vg0"
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:        }
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:    ],
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:    "1": [
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:        {
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "devices": [
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "/dev/loop4"
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            ],
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "lv_name": "ceph_lv1",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "lv_size": "21470642176",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6af7a565-fb2b-4a54-af6d-dd6e6079328b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "lv_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "name": "ceph_lv1",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "tags": {
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.block_uuid": "ofD98R-zcby-xKQl-yErN-Ndz5-95ZB-yIsPX9",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.cluster_name": "ceph",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.crush_device_class": "",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.encrypted": "0",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.objectstore": "bluestore",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.osd_fsid": "6af7a565-fb2b-4a54-af6d-dd6e6079328b",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.osd_id": "1",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.type": "block",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.vdo": "0",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.with_tpm": "0"
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            },
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "type": "block",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "vg_name": "ceph_vg1"
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:        }
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:    ],
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:    "2": [
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:        {
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "devices": [
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "/dev/loop5"
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            ],
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "lv_name": "ceph_lv2",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "lv_size": "21470642176",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=962d77ae-dc67-5de8-89d8-3d1670c67b61,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=446dbac2-6402-4180-8661-54a9bd1028fb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "lv_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "name": "ceph_lv2",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "tags": {
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.block_uuid": "RVDbRu-CiyS-0VDh-DrQQ-R0pA-H2Dd-aVsCcP",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.cephx_lockbox_secret": "",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.cluster_fsid": "962d77ae-dc67-5de8-89d8-3d1670c67b61",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.cluster_name": "ceph",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.crush_device_class": "",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.encrypted": "0",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.objectstore": "bluestore",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.osd_fsid": "446dbac2-6402-4180-8661-54a9bd1028fb",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.osd_id": "2",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.type": "block",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.vdo": "0",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:                "ceph.with_tpm": "0"
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            },
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "type": "block",
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:            "vg_name": "ceph_vg2"
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:        }
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]:    ]
Jan 31 05:39:52 np0005603787 agitated_shamir[259317]: }
Jan 31 05:39:52 np0005603787 systemd[1]: libpod-dea5d83f54ddd1904d89d6ace9970033858b9a1c38af989808c15e34bb12625d.scope: Deactivated successfully.
Jan 31 05:39:52 np0005603787 podman[259301]: 2026-01-31 10:39:52.62298789 +0000 UTC m=+0.431925455 container died dea5d83f54ddd1904d89d6ace9970033858b9a1c38af989808c15e34bb12625d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:39:52 np0005603787 systemd[1]: var-lib-containers-storage-overlay-0e993726f5db511a9cc48ddf6c5df50e55c87cd93740097d50b1862bcab64d85-merged.mount: Deactivated successfully.
Jan 31 05:39:52 np0005603787 podman[259301]: 2026-01-31 10:39:52.685020773 +0000 UTC m=+0.493958328 container remove dea5d83f54ddd1904d89d6ace9970033858b9a1c38af989808c15e34bb12625d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shamir, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Jan 31 05:39:52 np0005603787 systemd[1]: libpod-conmon-dea5d83f54ddd1904d89d6ace9970033858b9a1c38af989808c15e34bb12625d.scope: Deactivated successfully.
Jan 31 05:39:53 np0005603787 podman[259403]: 2026-01-31 10:39:53.135571335 +0000 UTC m=+0.050789318 container create b8206d2fb920b0a020ce088f04ff7a86ada46b675921a5d7d95af6758bf368de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_heyrovsky, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:39:53 np0005603787 systemd[1]: Started libpod-conmon-b8206d2fb920b0a020ce088f04ff7a86ada46b675921a5d7d95af6758bf368de.scope.
Jan 31 05:39:53 np0005603787 podman[259403]: 2026-01-31 10:39:53.106260205 +0000 UTC m=+0.021478278 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:39:53 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:39:53 np0005603787 podman[259403]: 2026-01-31 10:39:53.225310585 +0000 UTC m=+0.140528598 container init b8206d2fb920b0a020ce088f04ff7a86ada46b675921a5d7d95af6758bf368de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_heyrovsky, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:39:53 np0005603787 podman[259403]: 2026-01-31 10:39:53.23136355 +0000 UTC m=+0.146581513 container start b8206d2fb920b0a020ce088f04ff7a86ada46b675921a5d7d95af6758bf368de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:39:53 np0005603787 vigilant_heyrovsky[259419]: 167 167
Jan 31 05:39:53 np0005603787 systemd[1]: libpod-b8206d2fb920b0a020ce088f04ff7a86ada46b675921a5d7d95af6758bf368de.scope: Deactivated successfully.
Jan 31 05:39:53 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:39:53 np0005603787 podman[259403]: 2026-01-31 10:39:53.239317697 +0000 UTC m=+0.154535760 container attach b8206d2fb920b0a020ce088f04ff7a86ada46b675921a5d7d95af6758bf368de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:39:53 np0005603787 podman[259403]: 2026-01-31 10:39:53.23979575 +0000 UTC m=+0.155013773 container died b8206d2fb920b0a020ce088f04ff7a86ada46b675921a5d7d95af6758bf368de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:39:53 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:53 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f57202bdb23559504c2d633924e39e6b59d8b29d0ecf22f5d954aab53077646e-merged.mount: Deactivated successfully.
Jan 31 05:39:53 np0005603787 podman[259403]: 2026-01-31 10:39:53.294018691 +0000 UTC m=+0.209236674 container remove b8206d2fb920b0a020ce088f04ff7a86ada46b675921a5d7d95af6758bf368de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_heyrovsky, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:39:53 np0005603787 systemd[1]: libpod-conmon-b8206d2fb920b0a020ce088f04ff7a86ada46b675921a5d7d95af6758bf368de.scope: Deactivated successfully.
Jan 31 05:39:53 np0005603787 podman[259444]: 2026-01-31 10:39:53.449816514 +0000 UTC m=+0.046228013 container create 8bd21ff31080b08def733b99acc20499cfdad5c242df1155b7a824606d1df8be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_turing, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:39:53 np0005603787 systemd[1]: Started libpod-conmon-8bd21ff31080b08def733b99acc20499cfdad5c242df1155b7a824606d1df8be.scope.
Jan 31 05:39:53 np0005603787 systemd[1]: Started libcrun container.
Jan 31 05:39:53 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f667a1d12f25c3d34c2ef885568380669a3dfcbb096e4dc6369e4de709bf7e18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:39:53 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f667a1d12f25c3d34c2ef885568380669a3dfcbb096e4dc6369e4de709bf7e18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:39:53 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f667a1d12f25c3d34c2ef885568380669a3dfcbb096e4dc6369e4de709bf7e18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:39:53 np0005603787 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f667a1d12f25c3d34c2ef885568380669a3dfcbb096e4dc6369e4de709bf7e18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:39:53 np0005603787 podman[259444]: 2026-01-31 10:39:53.429003216 +0000 UTC m=+0.025414765 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:39:53 np0005603787 podman[259444]: 2026-01-31 10:39:53.538847415 +0000 UTC m=+0.135258944 container init 8bd21ff31080b08def733b99acc20499cfdad5c242df1155b7a824606d1df8be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 05:39:53 np0005603787 podman[259444]: 2026-01-31 10:39:53.545145827 +0000 UTC m=+0.141557366 container start 8bd21ff31080b08def733b99acc20499cfdad5c242df1155b7a824606d1df8be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 05:39:53 np0005603787 podman[259444]: 2026-01-31 10:39:53.553043053 +0000 UTC m=+0.149454602 container attach 8bd21ff31080b08def733b99acc20499cfdad5c242df1155b7a824606d1df8be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_turing, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:39:54 np0005603787 lvm[259538]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:39:54 np0005603787 lvm[259539]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:39:54 np0005603787 lvm[259538]: VG ceph_vg0 finished
Jan 31 05:39:54 np0005603787 lvm[259539]: VG ceph_vg1 finished
Jan 31 05:39:54 np0005603787 lvm[259541]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:39:54 np0005603787 lvm[259541]: VG ceph_vg2 finished
Jan 31 05:39:54 np0005603787 vigilant_turing[259460]: {}
Jan 31 05:39:54 np0005603787 systemd[1]: libpod-8bd21ff31080b08def733b99acc20499cfdad5c242df1155b7a824606d1df8be.scope: Deactivated successfully.
Jan 31 05:39:54 np0005603787 podman[259444]: 2026-01-31 10:39:54.288899034 +0000 UTC m=+0.885310543 container died 8bd21ff31080b08def733b99acc20499cfdad5c242df1155b7a824606d1df8be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 05:39:54 np0005603787 systemd[1]: libpod-8bd21ff31080b08def733b99acc20499cfdad5c242df1155b7a824606d1df8be.scope: Consumed 1.091s CPU time.
Jan 31 05:39:54 np0005603787 systemd[1]: var-lib-containers-storage-overlay-f667a1d12f25c3d34c2ef885568380669a3dfcbb096e4dc6369e4de709bf7e18-merged.mount: Deactivated successfully.
Jan 31 05:39:54 np0005603787 podman[259444]: 2026-01-31 10:39:54.338666953 +0000 UTC m=+0.935078492 container remove 8bd21ff31080b08def733b99acc20499cfdad5c242df1155b7a824606d1df8be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:39:54 np0005603787 systemd[1]: libpod-conmon-8bd21ff31080b08def733b99acc20499cfdad5c242df1155b7a824606d1df8be.scope: Deactivated successfully.
Jan 31 05:39:54 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:39:54 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:39:54 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:39:54 np0005603787 ceph-mon[75160]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.875184099073559e-07 of space, bias 1.0, pg target 0.00014625552297220677 quantized to 32 (current 32)
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4951578875402149e-06 of space, bias 4.0, pg target 0.0017941894650482578 quantized to 16 (current 16)
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:39:54 np0005603787 ceph-mgr[75453]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:39:54 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:39:54 np0005603787 ceph-mon[75160]: from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' 
Jan 31 05:39:55 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:57 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:39:58.246803) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855998246859, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1094, "num_deletes": 251, "total_data_size": 1627678, "memory_usage": 1653984, "flush_reason": "Manual Compaction"}
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855998267822, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 989293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27751, "largest_seqno": 28844, "table_properties": {"data_size": 985084, "index_size": 1797, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 11178, "raw_average_key_size": 20, "raw_value_size": 975861, "raw_average_value_size": 1824, "num_data_blocks": 82, "num_entries": 535, "num_filter_entries": 535, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769855895, "oldest_key_time": 1769855895, "file_creation_time": 1769855998, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 21057 microseconds, and 2308 cpu microseconds.
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:39:58.267873) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 989293 bytes OK
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:39:58.267893) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:39:58.273262) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:39:58.273290) EVENT_LOG_v1 {"time_micros": 1769855998273283, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:39:58.273312) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1622577, prev total WAL file size 1622577, number of live WAL files 2.
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:39:58.273983) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323533' seq:0, type:0; will stop at (end)
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(966KB)], [62(9199KB)]
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855998274057, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 10409795, "oldest_snapshot_seqno": -1}
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5209 keys, 7680672 bytes, temperature: kUnknown
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855998332714, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 7680672, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7647256, "index_size": 19290, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13061, "raw_key_size": 129149, "raw_average_key_size": 24, "raw_value_size": 7554503, "raw_average_value_size": 1450, "num_data_blocks": 801, "num_entries": 5209, "num_filter_entries": 5209, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769853439, "oldest_key_time": 0, "file_creation_time": 1769855998, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9067dea-12fb-43c2-8d5c-dbf66227f0e8", "db_session_id": "EXKALWXQ4I64EWVUMKE5", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:39:58.332982) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 7680672 bytes
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:39:58.338509) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.3 rd, 130.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.0 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(18.3) write-amplify(7.8) OK, records in: 5681, records dropped: 472 output_compression: NoCompression
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:39:58.338547) EVENT_LOG_v1 {"time_micros": 1769855998338533, "job": 34, "event": "compaction_finished", "compaction_time_micros": 58722, "compaction_time_cpu_micros": 16675, "output_level": 6, "num_output_files": 1, "total_output_size": 7680672, "num_input_records": 5681, "num_output_records": 5209, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855998338812, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769855998339783, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:39:58.273825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:39:58.339849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:39:58.339857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:39:58.339860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:39:58.339863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:39:58 np0005603787 ceph-mon[75160]: rocksdb: (Original Log Time 2026/01/31-10:39:58.339866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 05:39:59 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:39:59 np0005603787 podman[259582]: 2026-01-31 10:39:59.854925374 +0000 UTC m=+0.063610027 container health_status e94e0b74be7640d51f18fd1edca231beeb8f516dfa578782a0f41c3a17fcea4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 05:39:59 np0005603787 podman[259581]: 2026-01-31 10:39:59.909057512 +0000 UTC m=+0.116107331 container health_status da9f83d03220069d3a55c3ca5393ad93c48dde1c9327b5a52028c39d7449d209 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9a9718f85ff28ec53e499699c8f1f02d3a9c225ac4fffd09e348aa9c1ce8c2ad-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe-60ae3a4143784c57bc0b7c5d6d4e3aa5ad20092be5f03ba96019c2a04d18f2fe'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 05:40:01 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:40:03 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:40:03 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:40:05 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:40:07 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:40:07 np0005603787 systemd-logind[786]: New session 56 of user zuul.
Jan 31 05:40:07 np0005603787 systemd[1]: Started Session 56 of User zuul.
Jan 31 05:40:08 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:40:09 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:40:10 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14600 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:40:11 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14602 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:40:11 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:40:11 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 31 05:40:11 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3488756715' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 31 05:40:13 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:40:13 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:40:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:40:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:40:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:40:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:40:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:40:13 np0005603787 ceph-mgr[75453]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:40:15 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:40:17 np0005603787 nova_compute[238603]: 2026-01-31 10:40:17.121 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:40:17 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:40:17 np0005603787 ovs-vsctl[259954]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 31 05:40:18 np0005603787 nova_compute[238603]: 2026-01-31 10:40:18.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:40:18 np0005603787 virtqemud[238904]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 31 05:40:18 np0005603787 virtqemud[238904]: hostname: compute-0
Jan 31 05:40:18 np0005603787 virtqemud[238904]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 31 05:40:18 np0005603787 virtqemud[238904]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 31 05:40:18 np0005603787 virtqemud[238904]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 05:40:18 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:40:19 np0005603787 ceph-mds[95101]: mds.cephfs.compute-0.nykocs asok_command: cache status {prefix=cache status} (starting...)
Jan 31 05:40:19 np0005603787 nova_compute[238603]: 2026-01-31 10:40:19.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:40:19 np0005603787 nova_compute[238603]: 2026-01-31 10:40:19.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:40:19 np0005603787 nova_compute[238603]: 2026-01-31 10:40:19.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 05:40:19 np0005603787 ceph-mds[95101]: mds.cephfs.compute-0.nykocs asok_command: client ls {prefix=client ls} (starting...)
Jan 31 05:40:19 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:40:19 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14606 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:40:19 np0005603787 lvm[260311]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:40:19 np0005603787 lvm[260311]: VG ceph_vg2 finished
Jan 31 05:40:19 np0005603787 lvm[260318]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:40:19 np0005603787 lvm[260318]: VG ceph_vg1 finished
Jan 31 05:40:19 np0005603787 lvm[260330]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:40:19 np0005603787 lvm[260330]: VG ceph_vg0 finished
Jan 31 05:40:19 np0005603787 ceph-mds[95101]: mds.cephfs.compute-0.nykocs asok_command: damage ls {prefix=damage ls} (starting...)
Jan 31 05:40:19 np0005603787 ceph-mds[95101]: mds.cephfs.compute-0.nykocs asok_command: dump loads {prefix=dump loads} (starting...)
Jan 31 05:40:19 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14608 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:40:19 np0005603787 ceph-mds[95101]: mds.cephfs.compute-0.nykocs asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 31 05:40:20 np0005603787 nova_compute[238603]: 2026-01-31 10:40:20.103 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:40:20 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14610 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:40:20 np0005603787 ceph-mds[95101]: mds.cephfs.compute-0.nykocs asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 31 05:40:20 np0005603787 ceph-mds[95101]: mds.cephfs.compute-0.nykocs asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 31 05:40:20 np0005603787 ceph-mds[95101]: mds.cephfs.compute-0.nykocs asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 31 05:40:20 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14612 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:40:20 np0005603787 ceph-mgr[75453]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 05:40:20 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-mdmqaq[75449]: 2026-01-31T10:40:20.800+0000 7f445bfd4640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 05:40:20 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 31 05:40:20 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2338861397' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 31 05:40:20 np0005603787 ceph-mds[95101]: mds.cephfs.compute-0.nykocs asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 31 05:40:21 np0005603787 ceph-mds[95101]: mds.cephfs.compute-0.nykocs asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 31 05:40:21 np0005603787 nova_compute[238603]: 2026-01-31 10:40:21.102 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:40:21 np0005603787 nova_compute[238603]: 2026-01-31 10:40:21.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 05:40:21 np0005603787 nova_compute[238603]: 2026-01-31 10:40:21.103 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 05:40:21 np0005603787 nova_compute[238603]: 2026-01-31 10:40:21.128 238607 DEBUG nova.compute.manager [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 05:40:21 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:40:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:40:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3575178955' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:40:21 np0005603787 ceph-mds[95101]: mds.cephfs.compute-0.nykocs asok_command: ops {prefix=ops} (starting...)
Jan 31 05:40:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 31 05:40:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315052953' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 31 05:40:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 05:40:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1844405198' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 05:40:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 05:40:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1844405198' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 05:40:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 31 05:40:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3347291020' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 31 05:40:21 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 31 05:40:21 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2449309886' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 05:40:21 np0005603787 ceph-mds[95101]: mds.cephfs.compute-0.nykocs asok_command: session ls {prefix=session ls} (starting...)
Jan 31 05:40:22 np0005603787 ceph-mds[95101]: mds.cephfs.compute-0.nykocs asok_command: status {prefix=status} (starting...)
Jan 31 05:40:22 np0005603787 nova_compute[238603]: 2026-01-31 10:40:22.123 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 05:40:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 31 05:40:22 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3701163147' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 05:40:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 31 05:40:22 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1825610713' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 31 05:40:22 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 31 05:40:22 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1033131147' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 05:40:23 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14634 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:40:23 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:40:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:40:23 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 05:40:23 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/304460643' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 05:40:23 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14638 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:40:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 31 05:40:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1059357680' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 31 05:40:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 31 05:40:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2477805785' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 31 05:40:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 31 05:40:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3149207089' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 05:40:24 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 31 05:40:24 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4207108304' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 31 05:40:25 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14648 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:40:25 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14650 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:40:25 np0005603787 ceph-mgr[75453]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 05:40:25 np0005603787 ceph-962d77ae-dc67-5de8-89d8-3d1670c67b61-mgr-compute-0-mdmqaq[75449]: 2026-01-31T10:40:25.160+0000 7f445bfd4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 05:40:25 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:40:25 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14652 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 778240 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 770048 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 770048 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 761856 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 753664 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 753664 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 745472 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 745472 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 745472 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 737280 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 737280 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 729088 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 729088 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 712704 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 712704 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 712704 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 704512 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 704512 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 704512 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 696320 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 696320 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 688128 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 688128 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 688128 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 679936 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 679936 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 671744 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 671744 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 671744 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 663552 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 663552 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 647168 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 647168 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 638976 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 638976 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 630784 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 630784 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 630784 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 622592 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 622592 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 622592 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 614400 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 614400 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 614400 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 606208 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 606208 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 606208 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 598016 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 598016 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 589824 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 589824 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 573440 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 573440 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 573440 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 565248 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 565248 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 557056 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 548864 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 548864 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 540672 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 540672 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 532480 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 532480 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 532480 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 524288 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 524288 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 516096 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 516096 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 516096 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 507904 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 507904 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 507904 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 499712 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 499712 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 491520 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 491520 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 483328 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 475136 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 475136 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 466944 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 466944 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 458752 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 466944 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 466944 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 458752 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 458752 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 458752 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 450560 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 450560 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 442368 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 442368 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 434176 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 434176 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 425984 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 425984 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 425984 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 417792 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 409600 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 409600 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.3 total, 600.0 interval
Cumulative writes: 5425 writes, 23K keys, 5425 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5425 writes, 783 syncs, 6.93 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5425 writes, 23K keys, 5425 commit groups, 1.0 writes per commit group, ingest: 18.52 MB, 0.03 MB/s
Interval WAL: 5425 writes, 783 syncs, 6.93 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c8dcfd98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c8dcfd98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 344064 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 344064 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 335872 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 327680 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 327680 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 319488 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 319488 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 311296 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 303104 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 303104 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 294912 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 294912 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 294912 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 286720 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 286720 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 286720 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 278528 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 278528 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 270336 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 270336 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 262144 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 262144 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 253952 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 237568 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 237568 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 237568 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 229376 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 229376 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 221184 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 221184 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 221184 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 212992 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 212992 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 196608 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 196608 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 188416 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 188416 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 180224 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 180224 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 180224 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 172032 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 163840 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 155648 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 147456 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 147456 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 147456 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 139264 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 139264 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 122880 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 122880 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 122880 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 276.491088867s of 276.499969482s, submitted: 4
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 106496 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1794048 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1794048 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1794048 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1794048 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 1777664 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1769472 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1769472 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 1761280 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 1761280 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 1753088 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 1753088 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 1744896 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 1744896 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1736704 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 1728512 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1720320 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1712128 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1712128 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 1703936 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 1703936 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 1703936 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 1695744 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 1695744 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1679360 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1671168 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1662976 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1662976 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1654784 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1654784 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1646592 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1646592 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 1638400 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 1638400 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1630208 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1630208 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1630208 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1622016 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1622016 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1613824 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1605632 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1605632 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1597440 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1597440 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1581056 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1572864 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1572864 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1564672 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1564672 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1556480 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1556480 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1548288 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1540096 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1540096 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1540096 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1531904 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1531904 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1523712 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1523712 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1507328 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1507328 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1507328 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1499136 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1499136 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1474560 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1474560 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 1466368 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 1466368 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 1466368 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1458176 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1458176 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 1449984 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 1449984 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 1449984 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 1449984 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1433600 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1433600 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1433600 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1433600 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1433600 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1433600 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1433600 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1433600 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1433600 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 1409024 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 1409024 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1400832 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1400832 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1400832 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1400832 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1400832 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 1392640 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 1392640 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 1392640 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 1392640 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 1392640 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 1384448 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 1384448 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 1384448 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 1384448 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 1384448 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 1384448 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 1384448 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 1384448 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 1368064 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 1368064 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1359872 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1359872 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1359872 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1359872 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1359872 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1359872 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1359872 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1359872 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1359872 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1359872 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1359872 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1359872 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1359872 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1359872 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1359872 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1351680 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1351680 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1351680 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1335296 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1335296 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1335296 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1335296 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1335296 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1335296 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1335296 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1318912 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1318912 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1318912 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1286144 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1286144 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1286144 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1277952 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1277952 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1277952 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1277952 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1277952 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1277952 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1277952 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1261568 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1245184 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1236992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1236992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1236992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1236992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1236992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1228800 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1228800 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1228800 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1228800 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1228800 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1220608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1220608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1220608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1220608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1220608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1220608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1220608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1220608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1220608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1204224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1220608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1220608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1220608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1220608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1220608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1212416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1212416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1212416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1212416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1212416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1212416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1212416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1212416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1212416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1212416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1212416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1212416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1212416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1212416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1196032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1196032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1196032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1196032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1196032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1196032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1179648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1179648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1179648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1179648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1179648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1171456 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1171456 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1171456 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1171456 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1155072 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1138688 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1138688 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1138688 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1138688 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1138688 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1138688 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1138688 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1138688 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1138688 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1138688 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1122304 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1122304 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1122304 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1122304 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1122304 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: mgrc ms_handle_reset ms_handle_reset con 0x55c8ded10000
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2732794987
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2732794987,v1:192.168.122.100:6801/2732794987]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: mgrc handle_mgr_configure stats_period=5
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1024000 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1024000 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1024000 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1024000 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1024000 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1024000 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1024000 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1187840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1179648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1179648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1179648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1179648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1179648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943446 data_alloc: 218103808 data_used: 5611
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1171456 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1171456 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1171456 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.261657715s of 300.132904053s, submitted: 90
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1138688 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1138688 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 1130496 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 1130496 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 1130496 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 1130496 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 1130496 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1122304 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1122304 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1122304 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1122304 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1122304 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1122304 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1122304 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1122304 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1122304 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1114112 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1097728 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1097728 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1097728 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1097728 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1097728 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1097728 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1097728 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1097728 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1097728 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1097728 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1089536 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1089536 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1089536 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1089536 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1089536 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1089536 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1089536 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1081344 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1081344 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1081344 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1064960 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1048576 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 1032192 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 1032192 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 1032192 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 1032192 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 1032192 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 1032192 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 1032192 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 1032192 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 1032192 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 1032192 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 1032192 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1024000 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1024000 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1024000 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1024000 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1024000 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1024000 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1024000 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1024000 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1024000 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 1007616 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 1007616 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 1007616 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 1007616 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 1007616 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 1007616 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 1007616 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 1007616 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 1007616 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 1007616 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 1007616 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 999424 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 999424 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 999424 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 999424 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 999424 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 991232 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 991232 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 991232 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 974848 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 958464 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 958464 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 958464 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 958464 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 958464 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 958464 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 958464 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 958464 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 958464 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 958464 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 958464 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 942080 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 942080 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 942080 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 942080 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 942080 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 942080 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 942080 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 942080 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 942080 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 925696 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 925696 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 925696 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 925696 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 925696 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 925696 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 925696 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 917504 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 917504 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 917504 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 917504 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 917504 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 917504 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 917504 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 917504 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 917504 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 917504 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 917504 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 917504 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 917504 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1084532207' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 901120 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 901120 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 901120 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 901120 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 901120 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 901120 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 901120 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 884736 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 884736 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 884736 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 884736 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 884736 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 876544 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 876544 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 876544 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 876544 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 876544 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 876544 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 876544 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 876544 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 860160 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 860160 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 860160 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 860160 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.3 total, 600.0 interval#012Cumulative writes: 5653 writes, 24K keys, 5653 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5653 writes, 897 syncs, 6.30 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c8dcfd98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c8dcfd98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 827392 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 819200 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 819200 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 819200 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 819200 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 819200 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 819200 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 802816 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 802816 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 802816 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 802816 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 802816 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 802816 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 802816 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 802816 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 802816 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 786432 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 786432 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xbe0eb/0x17e000, compress 0x0/0x0/0x0, omap 0x10d97, meta 0x2bbf269), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 786432 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 786432 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944982 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 786432 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 124 handle_osd_map epochs [124,125], i have 125, src has [1,125]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 270.715484619s of 270.762878418s, submitted: 24
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 778240 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 753664 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 127 ms_handle_reset con 0x55c8e10ac800 session 0x55c8e1448380
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 9936896 heap: 84885504 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fca2d000/0x0/0x4ffc00000, data 0x533475/0x5f9000, compress 0x0/0x0/0x0, omap 0x11604, meta 0x2bbe9fc), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 8724480 heap: 84885504 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983424 data_alloc: 218103808 data_used: 7449
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 25583616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 128 ms_handle_reset con 0x55c8e10acc00 session 0x55c8e1449340
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 25559040 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 25559040 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fbdbd000/0x0/0x4ffc00000, data 0x11a5050/0x126d000, compress 0x0/0x0/0x0, omap 0x11c8d, meta 0x2bbe373), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 25559040 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fbdbd000/0x0/0x4ffc00000, data 0x11a5050/0x126d000, compress 0x0/0x0/0x0, omap 0x11c8d, meta 0x2bbe373), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 25559040 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053810 data_alloc: 218103808 data_used: 8034
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 25559040 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 25542656 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fbdba000/0x0/0x4ffc00000, data 0x11a6acf/0x1270000, compress 0x0/0x0/0x0, omap 0x11f65, meta 0x2bbe09b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 25542656 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 25542656 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.200855255s of 12.615010262s, submitted: 60
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 25370624 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1055322 data_alloc: 218103808 data_used: 8034
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 130 ms_handle_reset con 0x55c8e10ad000 session 0x55c8df4bba40
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 25321472 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 25296896 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fc228000/0x0/0x4ffc00000, data 0xd38679/0xe01000, compress 0x0/0x0/0x0, omap 0x12266, meta 0x2bbdd9a), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 25296896 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fc228000/0x0/0x4ffc00000, data 0xd38679/0xe01000, compress 0x0/0x0/0x0, omap 0x12266, meta 0x2bbdd9a), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 131 ms_handle_reset con 0x55c8e10ad400 session 0x55c8e120c8c0
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974444 data_alloc: 218103808 data_used: 12095
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 131 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xcbce1/0x196000, compress 0x0/0x0/0x0, omap 0x12a12, meta 0x2bbd5ee), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977346 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.655462265s of 12.143756866s, submitted: 84
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 25165824 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 25165824 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce91000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 25165824 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 25165824 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980120 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 25165824 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 25051136 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread fragmentation_score=0.000143 took=0.000032s
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 24985600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 24977408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 24969216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 24969216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 24969216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 24969216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 24969216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 24969216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 24961024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 24952832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 24952832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 24952832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 24952832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 24952832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 24952832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 24952832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 24952832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 24952832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 24952832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 24952832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 24952832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 24952832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 24952832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 24952832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24944640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24944640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24944640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24944640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24944640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24944640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24944640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24944640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24944640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24944640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 24936448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 24936448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 24936448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 24936448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 24936448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 24936448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 24936448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 24936448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 24936448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 24936448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 24928256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 24920064 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 24911872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 24895488 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 24895488 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 24895488 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 24895488 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 24895488 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 24895488 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 24895488 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 24895488 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 24895488 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 24895488 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 24895488 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 24895488 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 24895488 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 24895488 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 24895488 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 24887296 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 24887296 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 24887296 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 24887296 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 24887296 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 24870912 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 24870912 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 24870912 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 24870912 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 24870912 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 24870912 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 24870912 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 24870912 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 24870912 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 24870912 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 24870912 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 24870912 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 24870912 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 24870912 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 24870912 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 24870912 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce93000/0x0/0x4ffc00000, data 0xcd760/0x199000, compress 0x0/0x0/0x0, omap 0x12ce3, meta 0x2bbd31d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979400 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 24862720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 416.145324707s of 416.419555664s, submitted: 103
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 24723456 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 24723456 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 134 ms_handle_reset con 0x55c8e12d7c00 session 0x55c8df2eea80
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 23683072 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 135 ms_handle_reset con 0x55c8e10ac800 session 0x55c8df4bbc00
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fc68e000/0x0/0x4ffc00000, data 0x8cf342/0x99e000, compress 0x0/0x0/0x0, omap 0x13224, meta 0x2bbcddc), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 22609920 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034182 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 22609920 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 79118336 unmapped: 22552576 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fc688000/0x0/0x4ffc00000, data 0x8d0f01/0x9a2000, compress 0x0/0x0/0x0, omap 0x135c9, meta 0x2bbca37), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 79118336 unmapped: 22552576 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 79118336 unmapped: 22552576 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 79118336 unmapped: 22552576 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034182 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 22536192 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 22536192 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 22536192 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fc688000/0x0/0x4ffc00000, data 0x8d0f01/0x9a2000, compress 0x0/0x0/0x0, omap 0x135c9, meta 0x2bbca37), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 22536192 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.711121559s of 12.785113335s, submitted: 27
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 22503424 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 135 handle_osd_map epochs [135,136], i have 136, src has [1,136]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 136 ms_handle_reset con 0x55c8e0515400 session 0x55c8df355880
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034768 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 22749184 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 22749184 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 22749184 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fc687000/0x0/0x4ffc00000, data 0x8d2aab/0x9a3000, compress 0x0/0x0/0x0, omap 0x13a24, meta 0x2bbc5dc), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 137 ms_handle_reset con 0x55c8e1310c00 session 0x55c8e1449500
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 22683648 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 22683648 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995644 data_alloc: 218103808 data_used: 12708
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 22683648 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 22683648 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 22683648 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 22683648 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd4678/0x1a5000, compress 0x0/0x0/0x0, omap 0x142b7, meta 0x2bbbd49), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 22683648 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.242575645s of 10.370028496s, submitted: 68
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 21610496 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 21610496 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 21610496 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 21610496 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 21610496 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21594112 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 21577728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 21561344 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 21544960 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 21528576 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 21528576 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 21520384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 21520384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 21520384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 21520384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 21520384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 21520384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 21520384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 21520384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 21520384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 21520384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 21520384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 21520384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 21520384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 21520384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 21520384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 21520384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 21520384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 21520384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 21504000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 21504000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 21504000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 21504000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 21504000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 21504000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 21504000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 21504000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.3 total, 600.0 interval
Cumulative writes: 6308 writes, 25K keys, 6308 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 6308 writes, 1195 syncs, 5.28 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 655 writes, 1714 keys, 655 commit groups, 1.0 writes per commit group, ingest: 0.87 MB, 0.00 MB/s
Interval WAL: 655 writes, 298 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 21504000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 21504000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 21504000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 21504000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 21504000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 21504000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 21504000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 21487616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 21487616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 21487616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 21487616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: mgrc ms_handle_reset ms_handle_reset con 0x55c8e0b2d400
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2732794987
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2732794987,v1:192.168.122.100:6801/2732794987]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: mgrc handle_mgr_configure stats_period=5
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 21069824 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 21069824 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 21069824 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 21069824 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 21069824 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998354 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 21078016 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 165.104461670s of 165.116287231s, submitted: 15
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80666624 unmapped: 21004288 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 20946944 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 20905984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 20905984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 20905984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 20905984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 20905984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 20905984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 20905984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 20905984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 20905984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 20905984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 20905984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 20905984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 20905984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 20905984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 20897792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 20889600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 20881408 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 20873216 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 20865024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 20856832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 20848640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 20848640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 20848640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 20848640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 20848640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 20848640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 20848640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 20848640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 20848640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 20848640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 20848640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 20848640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 20848640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 20848640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 20848640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 20840448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 20832256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 20832256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 20832256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 20832256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 20832256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 20832256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 20832256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 20832256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 20832256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 20832256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 20832256 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 20815872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 20815872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 20815872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 20815872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 20815872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 20815872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 20815872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 20815872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 20815872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 20815872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 20815872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 20815872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 20815872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 20815872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 20815872 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 20783104 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 20783104 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 20783104 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 20783104 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 20783104 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20766720 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 20750336 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 20733952 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 20733952 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 20733952 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 20733952 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 20733952 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 20733952 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 20733952 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 20733952 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 20733952 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 20733952 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 20733952 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 20733952 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 20733952 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997634 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 20733952 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 20733952 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 20733952 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fce84000/0x0/0x4ffc00000, data 0xd60f7/0x1a8000, compress 0x0/0x0/0x0, omap 0x145de, meta 0x2bbba22), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 496.185974121s of 496.374053955s, submitted: 114
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 89579520 unmapped: 12091392 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 20480000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047615 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 ms_handle_reset con 0x55c8e1311c00 session 0x55c8df2eefc0
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 20414464 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 20414464 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 20398080 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 20398080 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 20398080 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047615 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 20398080 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 20398080 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 20545536 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 20545536 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 20545536 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047615 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 20545536 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 20545536 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 20545536 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 20545536 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 20545536 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047615 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 20545536 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 20545536 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 20545536 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 20545536 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 20545536 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047615 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 20545536 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 20545536 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047615 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047615 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047615 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047615 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20529152 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 20512768 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 20512768 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 20512768 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047615 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 20512768 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 20512768 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 20512768 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 20512768 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 20512768 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047615 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 20512768 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.3 total, 600.0 interval
Cumulative writes: 6562 writes, 26K keys, 6562 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 6562 writes, 1322 syncs, 4.96 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 254 writes, 461 keys, 254 commit groups, 1.0 writes per commit group, ingest: 0.17 MB, 0.00 MB/s
Interval WAL: 254 writes, 127 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 20512768 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 20512768 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 20512768 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 20512768 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047615 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 20512768 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67d000/0x0/0x4ffc00000, data 0x8d7cc6/0x9ad000, compress 0x0/0x0/0x0, omap 0x14bb3, meta 0x2bbb44d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 20512768 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 20512768 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 20496384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 20496384 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 61.390789032s of 62.001358032s, submitted: 13
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046489 data_alloc: 218103808 data_used: 16785
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 20299776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d7ca3/0x9ac000, compress 0x0/0x0/0x0, omap 0x14d57, meta 0x2bbb2a9), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 140 ms_handle_reset con 0x55c8e1311800 session 0x55c8df354700
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 20267008 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd9883/0x1ae000, compress 0x0/0x0/0x0, omap 0x151a3, meta 0x2bbae5d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 20267008 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 20267008 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 20267008 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd9883/0x1ae000, compress 0x0/0x0/0x0, omap 0x151a3, meta 0x2bbae5d), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008154 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 20267008 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 20267008 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 20267008 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 20267008 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 20267008 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce79000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010928 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 20267008 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 20267008 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce79000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010928 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce79000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010928 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce79000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce79000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010928 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce79000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010928 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce79000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce79000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce79000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce79000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010928 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce79000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce79000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010928 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 20258816 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 42.226173401s of 42.399364471s, submitted: 54
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 20275200 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 20234240 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 20234240 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010208 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 20144128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 20144128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 20144128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 20144128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 20144128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010208 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 20144128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 20144128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 20144128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 20144128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 20144128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010208 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 20144128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 20144128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010208 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010208 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010208 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010208 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010208 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010208 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010208 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xdb302/0x1b1000, compress 0x0/0x0/0x0, omap 0x154c5, meta 0x2bbab3b), peers [0,1] op hist [])
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 20135936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010208 data_alloc: 218103808 data_used: 16769
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 19963904 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: do_command 'config diff' '{prefix=config diff}'
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: do_command 'config show' '{prefix=config show}'
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 19750912 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 19529728 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 19251200 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:25 np0005603787 ceph-osd[87996]: do_command 'log dump' '{prefix=log dump}'
Jan 31 05:40:25 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14656 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:40:25 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.nqlmbk", "name": "rgw_frontends"} v 0)
Jan 31 05:40:25 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.nqlmbk", "name": "rgw_frontends"} : dispatch
Jan 31 05:40:26 np0005603787 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 05:40:26 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 31 05:40:26 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1933967848' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 31 05:40:26 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14660 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:40:26 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.nqlmbk", "name": "rgw_frontends"} v 0)
Jan 31 05:40:26 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2116645141' entity='mgr.compute-0.mdmqaq' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.nqlmbk", "name": "rgw_frontends"} : dispatch
Jan 31 05:40:27 np0005603787 nova_compute[238603]: 2026-01-31 10:40:27.101 238607 DEBUG oslo_service.periodic_task [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 05:40:27 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:40:27 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14664 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:40:27 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 31 05:40:27 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1313432612' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 05:40:27 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14666 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:40:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 31 05:40:28 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2766886601' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 05:40:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 05:40:28 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14670 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:40:28 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 31 05:40:28 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2645171795' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 05:40:28 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14674 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 05:40:28 np0005603787 nova_compute[238603]: 2026-01-31 10:40:28.992 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:40:28 np0005603787 nova_compute[238603]: 2026-01-31 10:40:28.992 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:40:28 np0005603787 nova_compute[238603]: 2026-01-31 10:40:28.992 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 05:40:28 np0005603787 nova_compute[238603]: 2026-01-31 10:40:28.992 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 05:40:28 np0005603787 nova_compute[238603]: 2026-01-31 10:40:28.993 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 05:40:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 05:40:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/253842726' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 05:40:29 np0005603787 ceph-mgr[75453]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:40:29 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14678 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 05:40:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 05:40:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589936611' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 05:40:29 np0005603787 nova_compute[238603]: 2026-01-31 10:40:29.585 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 05:40:29 np0005603787 nova_compute[238603]: 2026-01-31 10:40:29.743 238607 WARNING nova.virt.libvirt.driver [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 05:40:29 np0005603787 nova_compute[238603]: 2026-01-31 10:40:29.744 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4900MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 05:40:29 np0005603787 nova_compute[238603]: 2026-01-31 10:40:29.744 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 05:40:29 np0005603787 nova_compute[238603]: 2026-01-31 10:40:29.744 238607 DEBUG oslo_concurrency.lockutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 05:40:29 np0005603787 ceph-mon[75160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 31 05:40:29 np0005603787 ceph-mon[75160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2090007338' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 05:40:29 np0005603787 ceph-mgr[75453]: log_channel(audit) log [DBG] : from='client.14684 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 05:40:29 np0005603787 nova_compute[238603]: 2026-01-31 10:40:29.859 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 05:40:29 np0005603787 nova_compute[238603]: 2026-01-31 10:40:29.859 238607 DEBUG nova.compute.resource_tracker [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 974848 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 974848 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 966656 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 966656 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 958464 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 958464 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 958464 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 950272 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 950272 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 942080 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 942080 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 942080 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 933888 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 925696 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 925696 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 917504 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 917504 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 917504 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82968576 unmapped: 909312 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82968576 unmapped: 909312 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82976768 unmapped: 901120 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82976768 unmapped: 901120 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82976768 unmapped: 901120 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 892928 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 892928 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 884736 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 884736 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 876544 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 876544 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 876544 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 868352 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 860160 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 860160 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 851968 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 851968 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 851968 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 843776 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 835584 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 827392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 819200 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 802816 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 794624 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 786432 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 778240 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 770048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 761856 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 753664 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 753664 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 745472 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 737280 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 720896 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 712704 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 704512 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 704512 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 696320 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 696320 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 688128 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 688128 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 679936 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 671744 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 671744 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 663552 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 663552 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 663552 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 655360 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 655360 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 647168 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 602.2 total, 600.0 interval#012Cumulative writes: 6897 writes, 28K keys, 6897 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6897 writes, 1298 syncs, 5.31 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6897 writes, 28K keys, 6897 commit groups, 1.0 writes per commit group, ingest: 19.85 MB, 0.03 MB/s#012Interval WAL: 6897 writes, 1298 syncs, 5.31 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 602.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558daf9bba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 602.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558daf9bba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 602.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 581632 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 581632 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 573440 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 573440 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 565248 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 557056 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 548864 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 548864 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 548864 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 540672 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 540672 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 532480 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 524288 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 524288 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 516096 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 516096 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 507904 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 507904 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 507904 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 499712 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 499712 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 491520 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 491520 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 491520 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 483328 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 483328 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 483328 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 475136 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 475136 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 466944 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 466944 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 458752 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 458752 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 458752 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 450560 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 442368 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 434176 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 434176 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 434176 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 425984 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 425984 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 417792 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 417792 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 409600 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 409600 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 409600 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 401408 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 401408 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 393216 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 393216 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 393216 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 385024 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 385024 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 376832 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 376832 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 368640 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 360448 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 360448 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 308.716094971s of 308.727294922s, submitted: 6
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 1400832 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 303104 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 286720 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 278528 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 278528 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 278528 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 270336 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 270336 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 262144 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 262144 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 262144 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 253952 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 253952 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 245760 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 245760 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 237568 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 229376 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 221184 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 212992 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 212992 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 204800 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 204800 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 188416 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 180224 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 180224 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 172032 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 172032 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 172032 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 163840 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 163840 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 155648 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 155648 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 147456 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 147456 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 139264 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 139264 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 139264 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 131072 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 131072 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 122880 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 122880 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 122880 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 114688 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 114688 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 106496 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 106496 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 106496 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 98304 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 98304 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 98304 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 90112 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 90112 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 81920 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 81920 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 73728 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 73728 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 73728 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 57344 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 49152 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 49152 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 49152 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 40960 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 40960 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 40960 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 40960 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 40960 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 40960 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 40960 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 40960 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 nova_compute[238603]: 2026-01-31 10:40:29.882 238607 DEBUG oslo_concurrency.processutils [None req-cf59fee4-e98a-4174-b22b-db50085c5e28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 32768 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 24576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 16384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 8192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 0 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1040384 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1032192 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 1024000 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 1024000 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 1024000 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 1024000 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 1024000 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: mgrc ms_handle_reset ms_handle_reset con 0x558db17cc000
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2732794987
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2732794987,v1:192.168.122.100:6801/2732794987]
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: mgrc handle_mgr_configure stats_period=5
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 638976 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 ms_handle_reset con 0x558db1d94400 session 0x558db142f500
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 ms_handle_reset con 0x558db1d95c00 session 0x558db1d5f500
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 786432 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 786432 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 786432 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 778240 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.303436279s of 300.103332520s, submitted: 90
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 770048 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 761856 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 761856 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 761856 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 761856 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 761856 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 761856 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 761856 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 761856 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 753664 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 753664 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 745472 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 868352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 860160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1202.2 total, 600.0 interval#012Cumulative writes: 7121 writes, 29K keys, 7121 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7121 writes, 1410 syncs, 5.05 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 224 writes, 337 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1202.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558daf9bba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1202.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558daf9bba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1202.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007985 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fce3b000/0x0/0x4ffc00000, data 0x12e826/0x1f1000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 827392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 270.696594238s of 270.739715576s, submitted: 22
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 679936 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013147 data_alloc: 218103808 data_used: 10548
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 671744 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 17448960 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 127 ms_handle_reset con 0x558db3e5b400 session 0x558db3a55a40
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fc633000/0x0/0x4ffc00000, data 0x931fd5/0x9f9000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fc633000/0x0/0x4ffc00000, data 0x931fd5/0x9f9000, compress 0x0/0x0/0x0, omap 0x13e68, meta 0x2bbc198), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 128 ms_handle_reset con 0x558db3e5b800 session 0x558db4145880
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064771 data_alloc: 218103808 data_used: 11180
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 17317888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fc62b000/0x0/0x4ffc00000, data 0x935745/0x9ff000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 17317888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 128 handle_osd_map epochs [128,129], i have 129, src has [1,129]
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 17317888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067673 data_alloc: 218103808 data_used: 11765
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 17317888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 17317888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fc628000/0x0/0x4ffc00000, data 0x9371c4/0xa02000, compress 0x0/0x0/0x0, omap 0x1464d, meta 0x2bbb9b3), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 17186816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.597187042s of 13.864180565s, submitted: 40
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 17178624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 130 ms_handle_reset con 0x558db3db1800 session 0x558db1d5f6c0
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fc626000/0x0/0x4ffc00000, data 0x938da4/0xa04000, compress 0x0/0x0/0x0, omap 0x14903, meta 0x2bbb6fd), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 17178624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029566 data_alloc: 218103808 data_used: 11765
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 17178624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 17047552 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 131 ms_handle_reset con 0x558db3db0400 session 0x558db3a4ca80
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fce23000/0x0/0x4ffc00000, data 0x13a994/0x207000, compress 0x0/0x0/0x0, omap 0x14bbb, meta 0x2bbb445), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fce20000/0x0/0x4ffc00000, data 0x13c42f/0x20a000, compress 0x0/0x0/0x0, omap 0x14eed, meta 0x2bbb113), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035114 data_alloc: 218103808 data_used: 11765
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fce20000/0x0/0x4ffc00000, data 0x13c42f/0x20a000, compress 0x0/0x0/0x0, omap 0x14eed, meta 0x2bbb113), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fce20000/0x0/0x4ffc00000, data 0x13c42f/0x20a000, compress 0x0/0x0/0x0, omap 0x14eed, meta 0x2bbb113), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035242 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.252259254s of 12.421405792s, submitted: 45
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 17317888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 17317888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1d000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 17317888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1d000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 17383424 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread fragmentation_score=0.000142 took=0.000293s
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 17367040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 17358848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 17350656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 17342464 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 17334272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 17326080 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 17326080 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 17326080 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 17326080 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 17326080 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 17326080 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 17326080 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 17326080 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 17326080 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 17326080 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 17326080 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 17317888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 17317888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 17317888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 17317888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 17317888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 17317888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 17309696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 17301504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 17301504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 17301504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 17301504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 17301504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 17301504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 17301504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 17301504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 17301504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 17301504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 17301504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 17301504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 17301504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 17301504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 17432576 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 17424384 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 17416192 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 17408000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 17408000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 17408000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 17408000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 17408000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 17408000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 17408000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 17408000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 17408000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 17408000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 17408000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 17408000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 17408000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 17408000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037296 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 17408000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce1f000/0x0/0x4ffc00000, data 0x13deae/0x20d000, compress 0x0/0x0/0x0, omap 0x151a4, meta 0x2bbae5c), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 17408000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 17408000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 17408000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 414.616577148s of 414.897613525s, submitted: 104
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 24649728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 133 handle_osd_map epochs [133,134], i have 134, src has [1,134]
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 134 ms_handle_reset con 0x558db10ed400 session 0x558db429afc0
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111214 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 24625152 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 135 ms_handle_reset con 0x558db3db1800 session 0x558db4144000
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 24625152 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fc1a2000/0x0/0x4ffc00000, data 0xdb1629/0xe86000, compress 0x0/0x0/0x0, omap 0x15a6e, meta 0x2bba592), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 24592384 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 24592384 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fc1a2000/0x0/0x4ffc00000, data 0xdb1629/0xe86000, compress 0x0/0x0/0x0, omap 0x15a6e, meta 0x2bba592), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 24559616 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116190 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 24559616 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 24559616 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 24559616 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fc1a2000/0x0/0x4ffc00000, data 0xdb1629/0xe86000, compress 0x0/0x0/0x0, omap 0x15a6e, meta 0x2bba592), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 24559616 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 24559616 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116190 data_alloc: 218103808 data_used: 12378
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 24559616 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.647210121s of 12.738684654s, submitted: 29
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 86622208 unmapped: 24535040 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 136 ms_handle_reset con 0x558db3dad800 session 0x558db3e6f6c0
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc1a1000/0x0/0x4ffc00000, data 0xdb3219/0xe89000, compress 0x0/0x0/0x0, omap 0x15f0b, meta 0x2bba0f5), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 24772608 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 24772608 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 24772608 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 137 ms_handle_reset con 0x558db3dadc00 session 0x558db3a541c0
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1055624 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 23576576 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 23576576 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce11000/0x0/0x4ffc00000, data 0x144dc6/0x219000, compress 0x0/0x0/0x0, omap 0x1634d, meta 0x2bb9cb3), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1055624 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce11000/0x0/0x4ffc00000, data 0x144dc6/0x219000, compress 0x0/0x0/0x0, omap 0x1634d, meta 0x2bb9cb3), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.299363136s of 10.411116600s, submitted: 67
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:29 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 1802.2 total, 600.0 interval
                                              Cumulative writes: 7664 writes, 30K keys, 7664 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                              Cumulative WAL: 7664 writes, 1654 syncs, 4.63 writes per sync, written: 0.02 GB, 0.01 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 543 writes, 1415 keys, 543 commit groups, 1.0 writes per commit group, ingest: 0.73 MB, 0.00 MB/s
                                              Interval WAL: 543 writes, 244 syncs, 2.23 writes per sync, written: 0.00 GB, 0.00 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 23560192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 ms_handle_reset con 0x558db0dc4800 session 0x558daf9e0000
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: mgrc ms_handle_reset ms_handle_reset con 0x558db10ecc00
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2732794987
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2732794987,v1:192.168.122.100:6801/2732794987]
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: mgrc handle_mgr_configure stats_period=5
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 23306240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 ms_handle_reset con 0x558db1d95800 session 0x558db429bc00
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 ms_handle_reset con 0x558db1d94400 session 0x558db3a4c000
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 23306240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 23306240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 23306240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 23306240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 23306240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 23306240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 23306240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 23306240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 23306240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 23306240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 23306240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 23306240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 23306240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 23306240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 23306240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 23306240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 23306240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 ms_handle_reset con 0x558db1282c00 session 0x558db1476700
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058398 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce0e000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 165.108154297s of 165.124740601s, submitted: 14
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87883776 unmapped: 23273472 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 23265280 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce10000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057678 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce10000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057678 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce10000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057678 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce10000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fce10000/0x0/0x4ffc00000, data 0x146845/0x21c000, compress 0x0/0x0/0x0, omap 0x165fb, meta 0x2bb9a05), peers [0,2] op hist [])
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057678 data_alloc: 218103808 data_used: 16439
Jan 31 05:40:30 np0005603787 ceph-osd[86934]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 23298048 heap: 111157248 old mem: 2845415832 new mem: 2845415832